Computer-Assisted Serendipity

While I think we naturally conclude that the explosion of information in the medical research world is a good thing, it is not without challenges. The problem is compounded when you consider the information both inside and outside an organization.

It’s always exciting to see advances like big data and the semantic web applied to medical research. This interesting article, describing work at Oak Ridge National Laboratory focused on literature-based discovery, is worth a look: the goal is to supplement or enhance the human researcher rather than replace them, an approach the article neatly describes as “computer-assisted serendipity.”

A side effect of this information explosion, however, is the fragmentation of knowledge. With thousands of new articles being published by medical journals every day, developments that could inform and add context to medicine’s global body of knowledge often go unnoticed.

Uncovering these overlooked gaps is the primary objective of literature-based discovery, a practice that seeks to connect existing knowledge. The advent of online databases and advanced search techniques has aided this pursuit, but existing methods still lean heavily on researchers’ intuition and chance discovery. Better tools could help uncover previously unrecognized relationships, such as the link between a gene and a disease, a drug and a side effect, or an individual’s environment and risk of developing cancer.

(Source)
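To get a feel for what literature-based discovery actually does, here is a minimal sketch of the classic Swanson-style “ABC” model that the field grew out of (the famous fish oil/Raynaud’s and magnesium/migraine findings). Everything below is a toy: the one-line “abstracts” are invented and the term extraction is deliberately naive, whereas a production system like ORNL’s works over millions of articles with real NLP. The point is just the core idea: a bridging term B shared by two otherwise disconnected literatures can suggest a hidden A-C link.

```python
# Minimal sketch of Swanson-style "ABC" literature-based discovery:
# if term A co-occurs with B in some papers, and B with C in others,
# but A and C never appear together, the A-C pair is a candidate
# hidden connection. Abstracts and vocabulary here are invented toys.

from itertools import combinations

abstracts = [
    "fish oil reduces blood viscosity",                 # A-B literature
    "blood viscosity is elevated in raynaud syndrome",  # B-C literature
    "magnesium affects vascular tone",
    "vascular tone changes are seen in migraine",
]

def terms(text):
    """Deliberately naive term extraction from a hand-picked vocabulary."""
    vocab = ["fish oil", "blood viscosity", "raynaud syndrome",
             "magnesium", "vascular tone", "migraine"]
    return {v for v in vocab if v in text}

# Record which term pairs co-occur in the same abstract.
cooccur = set()
for abstract in abstracts:
    for x, y in combinations(sorted(terms(abstract)), 2):
        cooccur.add((x, y))

def linked(x, y):
    return (min(x, y), max(x, y)) in cooccur

# Propose A-C pairs that are bridged by some B but never co-mentioned.
vocab = {t for a in abstracts for t in terms(a)}
for a_term, c_term in combinations(sorted(vocab), 2):
    if linked(a_term, c_term):
        continue  # already directly connected in the literature
    bridges = [b for b in vocab - {a_term, c_term}
               if linked(a_term, b) and linked(b, c_term)]
    if bridges:
        print(f"candidate link: {a_term} -- {c_term} via {bridges}")
```

Run on the toy corpus, this surfaces fish oil/raynaud syndrome and magnesium/migraine as candidate links, which is exactly the shape of connection the article describes tools uncovering at scale.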

Exabyte Scale of Genomics Data (and cat videos)

DNA, Image Source: http://www.publicdomainpictures.net/view-image.php?image=42718&picture=dna

It’s no surprise that genomics represents a terrific big data challenge, but it is remarkable that its data has doubled every seven months over the last ten years, especially given how the field is poised to really explode in the coming years.

This article points out the comparison with astronomy and social media:

The authors estimate that the genomics information so far, from sequencing different organisms and a number of humans, has produced data on the petabyte scale (a petabyte is a million gigabytes). However, over the last decade, genomic sequencing data doubled about every seven months, and will grow at an even faster rate as personal genome sequencing becomes more widespread. The researchers estimate that by 2025, genomics data will explode to the exabyte scale – billions of gigabytes. This surpasses even YouTube, the current title holder among the domains studied for most data stored.

Frankly, it is refreshing to see such a valuable area of study surpassing a repository of countless cat videos as a leading data management problem in our society.

(Source)
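The growth rate in that quote is easy to sanity-check with a little arithmetic. A minimal sketch, assuming a starting size of 10 petabytes (an illustrative number; the article only says “petabyte scale”):

```python
import math

# Sanity check on the quoted growth rate: with genomics data doubling
# every seven months, how long until "petabyte scale" becomes
# "exabyte scale"? The starting size is an assumption for illustration.

start_pb = 10            # assumed current size, in petabytes
exabyte_in_pb = 1_000    # 1 exabyte = 1,000 petabytes
doubling_months = 7

doublings_needed = math.log2(exabyte_in_pb / start_pb)
months_needed = doublings_needed * doubling_months

print(f"~{doublings_needed:.1f} doublings, "
      f"~{months_needed / 12:.1f} years to reach exabyte scale")
# ~6.6 doublings, ~3.9 years -- comfortably before 2025, even before
# the faster growth the researchers expect from widespread personal
# genome sequencing.
```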

Knowledge Management is Dead. Long Live Knowledge Management.

Clearly a title like “Whatever Happened to Knowledge Management?” is going to catch my eye. In this WSJ piece, Thomas Davenport sheds some light on the present state of affairs for KM, and touches on some interesting points about SharePoint:

The technology that organizations wanted to employ was Microsoft’s SharePoint. There were several generations of KM technology—remember Lotus Notes, for example?—but over time the dominant system became SharePoint. It’s not a bad technology by any means, but Microsoft didn’t market it very effectively and didn’t market KM at all.

and something quite prevalent in my world (you may have heard of this “big data” thing):

KM never incorporated knowledge derived from data and analytics. I tried to get my knowledge management friends to incorporate analytical insights into their worlds, but most had an antipathy to that topic. It seems that in this world you either like text or you like numbers, and few people like both. I shifted into focusing on analytics and Big Data, but few of the KM crowd joined me.

In my view, one thing is certain: there is tremendous value locked in the heads of employees, hiding in content of all types, and waiting to be found in large data sets.

Enterprise tools of all kinds, from content management to search to analytics, are continuing to evolve. The increasing demands of global competition are driving a more collaborative workforce.

Regardless of whether we continue to label efforts to unlock that value as knowledge management, they will remain important.

Long live knowledge management.

Falling Asleep to Statistics

While following a thread about Big Data, I came across this interview with Sir David Cox, and loved this gem about problem solving (Source):

There is a well-established literature in mathematics that people who thought about a problem and do not know how to solve it, go to bed thinking about it and wake up the next morning with a solution. It’s not easily explicable but if you’re wide awake, you perhaps argue down the conventional lines of argument but what you need to do is something a bit crazy which you’re more likely to do if you’re half-awake or asleep. Presumably that’s the explanation!

Suddenly I feel a little better about falling asleep to statistics back in college.

Firms Find Ways to Cut Big-Data Costs

Interesting to read in this WSJ piece that hardware sales tied to corporate data projects are expected to more than double in four years. (Source)

As large companies collect, analyze and store increasing quantities of information, the expense of adding servers, hard drives and other equipment is threatening to crimp their big-data plans. Indeed, hardware sales related to corporate-data projects are expected to more than double to $15.7 billion in 2017 from $7.16 billion last year, according to Wikibon, a Marlborough, Mass., research organization.

Also how Riot Games is using Facebook’s Open Compute:

For example, Riot Games might be able to buy a commercial enterprise server, after discounts, for roughly $4,000. A comparable server bought wholesale and equipped with Open Compute software might run about $2,000, according to Mr. Williams.

More details on the barebones server design are on the Open Compute Project site.
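To put the quoted per-server prices in fleet terms, here is a trivial sketch. Only the per-unit figures come from the article; the server count is an invented assumption.

```python
# Fleet-level view of the per-server figures quoted above: roughly
# $4,000 for a discounted commercial enterprise server versus about
# $2,000 for a comparable wholesale Open Compute build. The fleet
# size below is a hypothetical assumption, not from the article.

commercial_unit = 4_000     # discounted commercial enterprise server
open_compute_unit = 2_000   # comparable wholesale Open Compute build
fleet_size = 500            # hypothetical server count

savings = fleet_size * (commercial_unit - open_compute_unit)
print(f"hardware savings on {fleet_size} servers: ${savings:,}")
# hardware savings on 500 servers: $1,000,000
```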