Cyberspace unraveled: Enter depths of the deep web

Websites of the week

The vast amount of information on
the web can be exceedingly daunting; making sense of it is a taxing process
indeed. This week we’ve had a look at some research sites that might just help
clarify matters.

visualeconomics.com

The subheading says it all, and
says nothing: ‘unravelling complexities in financial data’. But that’s not even
half the story. The genius of Visual Economics is that it takes dry figures and
wraps them in groovy graphics. The site therefore performs the mighty feat of
making them not just palatable but understandable to the average Joe. Which, I
guess, is me. An example is a superb graphic entitled ‘all the things BP could
buy with the money lost from the oil spill’. Somehow, the visuals are far more
arresting than mere prose could be as they stack up the costs of the various items – software companies, ice cream sandwiches – that BP could afford if it hadn’t
lost so much dosh. Hard to explain; easy to use, like all the best sites.

completeplanet.com

The web as we know it holds 167
terabytes of information, which is a vast amount whichever way you look at it.
But here’s the thing: most of the information in cyberspace is not searchable
by most traditional search engines. Searching the Internet is like dragging a
net across the surface of the ocean, according to experts – there’s plenty of
fish, but larger prey lurks in the deep. In fact, the majority of the web’s content
is dynamic pages, unlinked content, private or password-protected data
resources and non-standard file formats. This deep web content is estimated at
7,500 terabytes of data – or 550 billion documents. It has also been called the
invisible web, but not for much longer: completeplanet is one of a growing number of
sites capable of identifying and collating these previously hidden databases and
making them searchable. As research tools go, this is a cracker.

news.google.com/archivesearch

And just as we decide that poor old
Google isn’t delving deep enough into the web, back it comes with Google News
Archive. This is a historical database of newspapers going back 200 years – not
everything’s on there but maybe one day it will be. Given the web’s propensity
for constant updating, it’s an excellent way to cross-check content of times
past. We checked out the NY Times reports of the 1903 hurricane for starters.
Some of the links click through to paywalls, depending on the policy of the
newspaper in question – which rather tempers the perennial complaint that
Google’s business model is built on linking to other people’s copyrighted
content.

wikipedia.com

Yes, it’s a bit of an obvious one.
And, yes, we know that there’s historically been suspicion about the accuracy of
Wikipedia’s information. In case you’ve been living on Planet Zog for the last
ten years, Wikipedia is a free, open-access encyclopaedia which anyone can
write, edit or contribute to. Articles are appraised, corrected and commented on
by a community of volunteer editors. Unlike traditional encyclopaedias,
Wikipedia can be updated instantly as new information comes in. However,
reliable statistics on the site’s relative accuracy remain scarce, which makes
it essential to cross-check facts against other sources, such as Encyclopaedia
Britannica.

compasscayman.com

Of course, for news this should
always be the first stop. Did you know that you can also read e-versions of the
Hurricane Supplement, Cayman Financial Review, Inside Out, Key to Cayman,
Observer on Sunday, The Chamber, The Journal and What’s Hot online? Good, huh?