Saturday, March 26, 2011

Yes, Some Questions are Better Left Unasked


We've been told since we were old enough to test our merits and theories that there are no dumb questions. The urgency of this instruction tends to increase not with the caliber of the questioning but with the size of the group. Frowning on any inquisitive form is akin to the inquisition of Oliver Twist and his empty gruel bowl. Nobody wants to be the public silencer of curiosity. No one wants to discourage debate -- at least while the cameras are rolling. That kind of instruction happens more naturally behind closed doors (and minds too ... one insinuates). That said, most public forums for open questioning happen virtually these days. From email group lists to Facebook walls, there's a whole lot of prodding and posturing around what we're trying to pursue -- questions we feel more comfortable asking our fellow humans than the sequestered confession chamber we have come to know as the Google search box.

These garden-variety crowd-sourcing drills are routine inside my firewall, where domain experts and junior staff alike are more comfortable receiving pointers and attachments from peers than plowing through an unvetted pile of search results. Sometimes an uninformed question is so basic that tree-in-the-forest physics kick in: No response is not the same as being ignored, though admittedly it's hard to spot the difference in cyberspace.

One way to avoid the open stares of disbelieving colleagues is to press for currency -- a great face-saving defense that implies we know our stuff -- just not the latest stuff. Another is to watch the zealousness of the responders. Are they dispatching their personal stashes of best practices or giving the requester a number to call? Is the responder as secure in their answers as they are in their positions? There is a tendency for junior-level people to overshare expertise because: (1) they're eager to jump in, (2) they prefer texting to face-to-face problem-solving, or (3) they need to get with the post-merger realities of a changing power structure.

But enough about me, the responder. How can I make your questions more informed without making them too pointed? How can they expand and advance existing discussions without becoming too open-ended -- too tenuous to invite follow-up?

For starters, let the question breathe a little. Don't put practitioners on the spot with "is it this?" or "is it that?" Reducing all the complexities to a multiple-choice outcome tells the gallery you really can't draw the distinction between talking to machines and talking to people. Secondly, how the question gets asked supersedes the topic or our learning objectives. Owning up to where we've been, the walls we hit, and the loops we're trying to cycle through means we're paying our dues and respects -- not to any one obstacle or expert but to the community of practice we're addressing for its collective problem-solving intelligence. As hacker Eric Raymond writes in his online manual How to Ask Questions the Smart Way, there is more than a subtle difference between demanding “an answer” and requesting collaborators. The presumption is that there is a scarcity of definitive answers and a wealth of lessons waiting to be drawn from a pool of experience.

Another crowdsourcing pleaser is to summarize the responses, both as a form of gratitude to the participants and respect for the process. A brief write-up of the investigation also validates the commitments of the community to building know-how, not merely revisiting the same knowledge with our domain experts. That's because the summary integrates the responses into a shared output. It's the interplay of an unfolding conversation -- not a scripted, one-sided and static one. Unpacking responses is conducive to wikis so long as the formatting remains simple enough to foster and contain further problem-solving as the community evolves.

Then there's the powerful allure of motivation: why I'm asking. Divulging one's incentives for knowing will disarm the most overconfident know-it-all. That means not competing for smartest guy-in-the-room. The burden of proof is shifted towards a common purpose around a shared understanding. Of course it will take more than case summaries and deference to practice members for junior-level requesters to pick the brains of the more seasoned practitioners. The purpose of the question is key. It's not that answering a question makes us instant and equal partners in the same outcome. It's the faith in knowing that the wheel will turn. As casually as a requester can speak their inquiring mind, the sincerest way to complete this virtuous loop is through reciprocation.

In sum the most incisive question can lose its smarts if the questioner:

  • asks unblinking yes-or-no questions that require a back story to move forward

  • is not forthcoming about the path that led to their request, and

  • fails to disclose what they hope to gain by involving the larger community.


Conversely, the burden falls on the responder to: (a) trust that the collaboration can run both ways, or (b) rediscover the rapture of learning from the same advice they're giving -- that giving is its own reward. The teaching becomes the ends as well as the means.

As an online research educator I consider myself lucky to fall inside this second camp.

Any questions?

 



Sunday, March 20, 2011

We're Out of Internet Time

[caption id="attachment_589" align="alignleft" width="300" caption="The sweet spot: No costs in next to no Internet time"][/caption]

Remember the roaring nineties? E-verything was e-commerce e-xuberance. A dotcom domain was a license to go public. Actual customers and products were exempted from establishing share prices on a uniquely American vintage of snake oil.


One of the other vestiges of the NASDAQ bubble was the notion of Internet time. This was the tagline update for an economy that never sleeps. No one wanted to have fallen prey to the old familiar P/E ratios or even acknowledge that old-school thinking had any sway over emerging business models.


The financial services supply chain was no longer tangible. But at least it was still intelligible -- no? Okay, we settled for legible back then in those bulging Internet-based portfolios. Then came the spook show of 2001 and these notions were beaten into weightlessness -- not by regulators but by the laws of financial gravity. Only then did the WorldComs, Enrons, Tycos, et al. spiral away from shareholders, employees, and other interdependent life forms on planet money.


Losing Track of Internet Time


In the Dotcom era, complacency became the new taboo. You could be going nowhere. You could be revving over-funded engines off cliffs of falling cash flows. Just no two-hour lunches, man. The alarm clock had rung and the snooze bar was jammed.  Every day we were told how “hot” companies and “cool” products were intensifying established markets or creating new ones:




  • How hot?

  • How cool?

  • How intense?


It didn't much matter. There were so many winners in this most generous of competitions. Angel investors were flipping start-ups. The Feds sat in a corner. Stagnation was abolished. Companies simply grew or died. Both experiences were nearly instant.


All comers were out for one thing: to please the 'mother-of-all networks' as the nurturing vessel and growth serum. That was no Goodyear Blimp. That was a gaggle of rippling packet switches so enmeshed it eclipsed all television, telephony, and satellite networks combined. Remember all of us meeting there for the first time? Meeting anywhere else would never be the same.


Who in 1994 could have predicted that within 5 years desktop "searching" would become as cheap and pervasive as mouse pads and screen savers?


What's a screen saver?


The truth is that Internet Time has done the unthinkable. It has frozen our cluttered calendars in their tracks. How do I know? I woke up from my millennial hangover this morning and witnessed a miracle. We now have all the time in the world. I say that because online is no longer a useful distinction for defining offline. There are now only the folks who have severed the signal, and the rest of us.



We pay through the nose for free information with calendars that are anything but free.


And how about that most favored time = cost factor ... the biological father of all business metrics? On Internet Time information wants to be free. Our calendars are not free at all but we're willing to spend untrackable hours tackled by our own fruitless searches. The paradox is astonishing. Information from the web is as plentiful as it is free and we still pay through the nose. How?


All-you-can-drink access has gone from flirtation, to utility, to civil liberty. So too the search engine as social moderator means that we only pay cash for pushing merchandise or pulling it in. That leaves a whole lot of intangible inventory around shopping for ideas, buying arguments and paying with our limited attentions as well as wallets.  But it's easier to spend all that unlimited attention on people-watching than self-educating and acting on what we learn.


The last time I set my alarm clock I never heard it go off. Or perhaps it had never stopped ringing? Next time you research, Google like you paid for it.




[caption id="attachment_587" align="alignright" width="240" caption="Last week's kickoff of Online Investigations for Pioneer Valley Professionals"][/caption]

Saturday, March 5, 2011

The Art of Crafting Natural Intelligence

[caption id="attachment_555" align="alignleft" width="190" caption="'When you use more than 5% of your brain, you don't want to be on Earth.'"][/caption]

I've probably teared up more at an unfair hockey fight, and I've had more emotionally engulfing movie-goings. But as far as a life philosophy that plays out on screen, no self-contained cinematic mythology holds my candles quite like Albert Brooks' Defending Your Life.

For most of the story Brooks' day of judgment is about to play out in the purgatorial trappings of a Disneyesque lodging and office complex. Is the protagonist to advance on the eternal enlightenment path to some higher plane? Will he shuffle back on the next tram for a return date with "the little brains" on earth? That swipe at us live inhabitants is a line delivered by Rip Torn, Brooks' defense attorney, who testifies to a 53% utilization of his own cranial capacity. Us little brains use 2-3% -- the remainder of our mind-shafts are crowded out by lethargy and fear.

When One Framework is Worth a Thousand Taxonomies

I've wondered what fears could be confronted and ultimately shed so that I could soar, perhaps, from 2-3% up to 5-6%. In that spirit I've recently stumbled across a framework called Bloom's Taxonomy. Echoing the defense lawyer's slam at our low-performing, fear-crowded mental capacities, Bloom held that 80-90% of our brain function sits in the lowest realm of sense-making. He calls this "knowledge." Knowledge is accessed through the following retention portals:

remembering, memorizing, recognizing, recalling, identification, recalling information, who, what, where, when, how, describing

Kinda oafish, no? It's deciphering 101. It's on or it's off. X=Y or fuggedaboutit.

The pattern-matching of keywords is not the face that launched a thousand ships but the probability gag that seated a thousand monkeys at their typewriters in order to write the great American novel or the great American Internet start-up -- whatever cashes out higher. Just Ask Jeeves! These are the well-trodden grounds of that cloistered chamber you and I have come to know as web search. Its premise is still tuned to exact-match good-enough-ness. That's because we can be sold nouns even more easily than the notion that our mental blanks are being filled in by omnipotent language engineers. We frugal consumers cave to deals on things -- not to actions about ideas. Nouns are the merchandise -- not the verbs that help us to backorder our understanding of what we actually do with our bill of goods. Unless we're potential suspects in a case, no one is interested in our trail 'o stuff -- unless they can sell it to us again.

The next order of mental processing is to isolate noun phrases from their predicates. That means getting the search engine to distinguish actors from their actions, reducing outcomes to a range of questions we're ready to answer -- or at least lower our surprise should they arise. That kind of conditional logic exists in our mental reflexes whether we've had our morning shower or coffee.

It's interesting that in the pecking order of brain function the inverted pyramid of journalism ranks somewhere in the custodial closet of the ivy-coated shrines of higher learning. Not incidentally these are the unremarkable terms on which IBM's Watson, the question answering machine, beat its human Jeopardy contestants to the buzzer. It took a fact base so bottomless it would turn baseless in the gear shafts of the most fervently applied quiz show savant. Watson's algorithmic swagger chewed through mounds of trivia like a smoldering ash heap of documentation fertilizer.

Elementary School My Dear Watson

The conquest prompted one of the IBM partisans to reflect in the New York Times on finding Watson more meaningful work:
“I have been in medical education for 40 years and we’re still a very memory-based curriculum,” said Dr. Herbert Chase, a professor of clinical medicine at Columbia University... “The power of Watson-like tools will cause us to reconsider what it is we want students to do.”

At the same time Watson's next gig as a physician's assistant begs a more immediate question: how do we humans need to raise our learning games to Bloom's next levels of comprehension, application, analysis and synthesis? How do we aid and abet the healthy transfer of know-how between us inquiring pea brains?

Knowing a lot about an academic discipline is, at best, tangential to teaching it. Having a natural understanding of a subject can be an unnatural fit for passing that understanding along to others. Assuming that academics are better at publishing papers and attending conferences than at educating students, the question falls to the insatiable learners among us: how do we teach ourselves on a level beyond the aspirations of Watson's parents? How do we convince supple, young minds that a healthy dose of skepticism about humans is only the first of a storehouse of rational and instinctive reasons to doubt the merits and intentions of question-answering machines?

The current cover story of the Atlantic Monthly offers up Mind Versus Machine. Here science writer Brian Christian plays the role Watson's two Jeopardy adversaries played: the human defending the species. The objective of the annual Turing Test is for AI ("artificial intelligence") programmers to convince a sequestered panel, communicating via on-screen text, that a machine can out-human its creators on a range of topics spanning from "celebrity gossip" to "heavy-duty philosophy." The advice Christian was given when cramming for this contest?

"Be yourself."

Gee, and I thought I knew how to body surf with the more cryptic sharks!

Five minutes of IM messages later Christian was crowned the winner of the Most Human Human Award -- chiefly for two reasons:

  1. His dominating volleys (he's not waiting on Alex Trebek to pounce, pry, or provoke)

  2. His insights into how the bottom-feeder knowledge spoon-fed to his AI adversary highlights natural human intelligence in the experiential realm:


One of my best friends was a barista in high school. Over the course of a day, she would make countless subtle adjustments to the espresso being made, to account for everything from the freshness of the beans to the temperature of the machine to the barometric pressure’s effect on the steam volume, meanwhile manipulating the machine with an octopus’s dexterity and bantering with all manner of customers on whatever topics came up. Then she went to college and landed her first “real” job: rigidly procedural data entry. She thought longingly back to her barista days—when her job actually made demands of her intelligence.

That's a lesson well worth reteaching ourselves the next time we find ourselves needing to justify more question/answer sessions scheduled in the upper echelons of Bloom's taxonomy.

About attentionSpin

attentionSpin is a consulting practice formed in 1990 to create, automate and apply a universal scoring system (“The Biggest Picture”) to brands, celebrities, events and policy issues in the public eye. In the Biggest Picture, attentionSpin applies the principles of market research to the process of media analytics to score the volume and nature of media coverage. The explanatory power of this research model:

  1. Allows practitioners to understand the requirements for managing the quality of attention they receive

  2. Shows influencers the level of authority they hold in forums where companies, office-seekers, celebrities and experts sell their visions, opinions and skills

  3. Creates meaningful standards for measuring the success and failure of campaigns and their connection to marketable assets.