hezseone1x7s
CLASS B
Joined: 28 Mar 2011
Posts: 48
Read: 0 topics
Warnings: 0/5  From: England
Posted: Sat 3:53, 07 May 2011
There are some 800 million pages filled with information on the World Wide Web, but when people use Internet search engines they can reach only about half of these pages, according to a new study by a team of computer scientists. Search engines are falling short when it comes to indexing the Web. The study comes from a research institute owned by a computer and communications firm.
In a similar study done at the end of 1997, the researchers found that six top search engines collectively covered 60 percent of the Web, and that the best engine covered roughly a third of all sites. A report in a well-known magazine last February stated that, in a test of 11 top search engines, the combined coverage amounted to only 42 percent of all sites, and that no single program was able to cover more than about 16 percent of the Web.
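To see why combined coverage is not simply the sum of per-engine shares, here is a minimal sketch, assuming a made-up 100-page Web and two hypothetical engines with overlapping indexes (none of these numbers come from the study):

# Hypothetical illustration: engine indexes overlap, so combined
# coverage is the size of the union, not the sum of the shares.
web = set(range(100))           # pretend the whole Web is 100 pages
engine_a = set(range(0, 16))    # one engine indexes 16% of the pages
engine_b = set(range(10, 26))   # another indexes 16%, partly overlapping

combined = engine_a | engine_b  # pages covered by at least one engine
print(f"Combined coverage: {100 * len(combined) // len(web)} percent")  # 26, not 32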
The promise made when the Web came out was that it would equalize access to information, but what search engines actually do is index the more popular sites that have more links pointing to them, leaving websites that may carry new, high-quality information less visible.
There is more ground to cover when it comes to Internet information and content: the first estimate came to about 320 million pages, but just 14 months later the researchers found that the estimate should have been double that initial figure. The Web alone holds 6 trillion bytes of information, compared with 20 trillion bytes in the Library of Congress. About 3 million publicly available servers, averaging 289 pages per server, were counted, a figure that came from the researchers' random sampling exercise covering 2,500 Web sites.
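As a back-of-the-envelope check on the arithmetic behind that size estimate, here is a minimal sketch using the figures quoted above (the script itself is only illustrative):

# Rough size estimate: servers times average pages per server.
servers = 3_000_000          # publicly available Web servers (approx.)
pages_per_server = 289       # average from the researchers' sample

estimated_pages = servers * pages_per_server
print(f"Estimated Web size: {estimated_pages:,} pages")  # ~867,000,000

first_estimate = 320_000_000  # the earlier, 14-month-old estimate
print(f"Ratio to first estimate: {estimated_pages / first_estimate:.1f}x")

The product comes out near 870 million, in line with the 800-million-page figure cited at the top of the post.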
The quantity of information on the Web could be even larger, because it is possible that just a few sites hold millions of pages. Several tests were run on the servers, and they showed that 83 percent carried commercial content such as company Web pages and catalogs, 2 percent had pornographic content, 2 percent were personal Web pages, about 3 percent had information on health, and about 6 percent had information on science and education. It is not the volume but the techniques used by search engines that make so much of the Web hard to find.
The two main methods search providers use to discover pages are user registration and following links to new pages. These efforts have produced a biased sample of the Web, because as engines follow links to new pages they find and index the pages that already have more links pointing to them. The problem is not a lack of indexing capability; the problem is that resources are put toward additional uses for users, including valuable services such as free email.
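Here is a minimal sketch of the link-following idea, assuming a toy link graph invented purely for illustration (not any real crawler); it shows how a page with no inbound links is simply never discovered:

from collections import deque

# Toy link graph: each page maps to the pages it links to.
links = {
    "popular.example":  ["news.example", "shop.example"],
    "news.example":     ["popular.example"],
    "shop.example":     ["popular.example", "news.example"],
    "obscure.example":  ["popular.example"],  # links out, nothing links in
}

def crawl(seeds):
    """Breadth-first crawl: index every page reachable via links."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

indexed = crawl(["popular.example"])
print("obscure.example" in indexed)  # False: no inbound links, never indexed

This is exactly the bias described above: well-linked pages enter the index quickly, while an unlinked page stays invisible unless its owner registers it by hand.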
According to a search engine specialist, the simple information requests people make are the real reason they fail to notice how much they are missing. This imbalance in cataloguing is expected to continue for a few more years, because people create information content for new sites much more slowly than computing resources grow.