
Archive for the ‘Search engines’ Category

Is Web 4.0 emerging?

31 Oct

Tim Berners-Lee’s initial impulse in creating, in the early 1990s, another protocol on top of the Internet (the Web and the Internet are different things) was to spread scientific information more quickly, so we can say it was an information-centered Web.
The Web quickly became popular, and as concern for the Semantic Web grew, Berners-Lee, James Hendler and Ora Lassila published the inaugural paper “The Semantic Web: a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities”, in which further development was envisioned through knowledge representation, ontologies, intelligent agents and, finally, an “evolution of knowledge.”
Web 2.0 had interactivity as its initial feature (O’Reilly, 2005): users became freer to interact with web pages and could tag, comment on and share documents found online.
The article pointed to ontologies as the “natural” way to develop and add meaning to information in the Semantic Web, using methodologies from Artificial Intelligence, which in the eyes of James Hendler (Web 3.0) had gone through a creative “winter.”
But three integrated tools ended up indicating a new path: ontologies helped build simple knowledge organization schemes (SKOS – Simple Knowledge Organization System), a database that could be queried with a language called SPARQL, and what was already the basis of the Semantic Web, RDF (Resource Description Framework), in its simple descriptive language: XML.
The first major project was DBpedia, a database proposed in 2007 by the Free University of Berlin and the University of Leipzig in collaboration with OpenLink Software, structured around Wikipedia and using some 3.4 million concepts to form 2.46 billion RDF triples (resource, property and value), or more simply subject-predicate-object, each one indicating a semantic relationship.
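As a concrete illustration of how these pieces fit together (RDF triples in subject-predicate-object form, a SKOS concept and a SPARQL query), here is a minimal sketch in Python using the rdflib library; the example.org names and the Berlin/Germany triple are hypothetical placeholders for this post, not data taken from DBpedia itself.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace, used only for this illustration
EX = Namespace("http://example.org/")

g = Graph()

# Subject-predicate-object triples, DBpedia-style
g.add((EX.Berlin, RDF.type, SKOS.Concept))
g.add((EX.Berlin, SKOS.prefLabel, Literal("Berlin", lang="en")))
g.add((EX.Berlin, EX.capitalOf, EX.Germany))

# SPARQL query over the local graph: which concept is the capital of Germany?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city WHERE { ?city ex:capitalOf ex:Germany . }
""")
for row in results:
    print(row.city)  # -> http://example.org/Berlin

# The same triples can be serialized in RDF/XML, the descriptive format cited above
print(g.serialize(format="xml"))
```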
There are several types of intelligent agents in development that make little or no use of the “intelligence” of Web 3.0; will there be new developments in the future? We pointed out in a recent article the Semantic Scholar tool from the Paul Allen Foundation, but its connection to Web 3.0 (projects related to linked data) is also not clear.
2016 has definitely not been the year of the Smart Web, or, if you prefer, Web 4.0, but we are getting closer: personal assistants (Siri, Cortana, Facebook’s “M”), home automation (Apple HomeKit, Nest), image recognition and driverless cars are right around the corner.
Home automation means smart features for the home, and this is a field where AI is growing fast.

 

The internet will disappear, says head of Google

27 Jan

It is not what you are thinking: the point, said Google chief Eric Schmidt last Thursday in Davos, Switzerland, where the World Economic Forum was held, is that the internet will become so pervasive that it will be impossible to connect to anything without using it.

Asked about the future of the Internet, he replied: “I will answer very simply that the internet will disappear,” according to video made available by the US television network CNBC.

He did not mean by this that the Internet will follow the path of photographic film and floppy disks; Schmidt’s understanding is that the network will be so present in our daily lives that it will become inescapable.

It may be present from the moment a person is born and in every moment of that person’s life: family life, medical care, studies and various social activities, of course subject to the law of “forgetting”, which is the ability to make unwanted records disappear.

But the problem of privacy remains: how to ensure that data is not lost or does not fall into hands that would misuse it? Here is a great problem to be solved.

 

(Português) FIFA did not control tickets

01 Jul

Sorry, this entry is only available in Brazilian Portuguese.

 

(Português) News and cardboard at Google I/O

27 Jun

Sorry, this entry is only available in Brazilian Portuguese.

 

(Português) Google goes shopping again

20 May

Sorry, this entry is only available in Brazilian Portuguese.

 

Realities and fantasies of Web 3.0

14 Nov

In November 2006, John Markoff wrote in the New York Times using the term Web 3.0, saying that it would find new ways of mining human intelligence: “From the billions of documents that form the World Wide Web (WWW) and the links that bind them, computer scientists and a growing group of new businesses …” (see New York Times).

Definitions vary widely, from those who think of personalization features to the Semantic Web and the like; from figures like Conrad Wolfram, who thinks that Web 3.0 will be the place where “the computer will generate new information”, to pessimists like Andrew Keen (The Cult of the Amateur), who sees in Web 3.0 a return to experts and authorities, calling the idea of connecting and organizing information on the Web an “unrealisable abstraction”.

Consider what is truly a founding text by James Hendler, published in IEEE Computer in January 2009, “Web 3.0 Emerging”, which explains that, after countless turns around the Semantic Web, technology has finally found what can help it realize itself.

The article explains that the technologies integrated into the emerging Semantic Web are already starting to produce results: from basic applications using RDF (Resource Description Framework) descriptions, to linking data from multiple Web sites using a SQL-like standard language, the SPARQL query over that RDF, to connections that are already available as XML or OWL ontologies.
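To give a sense of what querying linked data with SPARQL looks like in practice, here is a small sketch in Python using the SPARQLWrapper library against DBpedia’s public endpoint; the specific query, the LIMIT and the endpoint’s availability are assumptions of this illustration, not details given in the article.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia SPARQL endpoint (assumed reachable; data and availability change over time)
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")

# A SQL-like SPARQL query: fetch a few resources and their English labels
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?resource ?label WHERE {
        ?resource rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
    LIMIT 5
""")
endpoint.setReturnFormat(JSON)

# The endpoint returns the query results as JSON bindings
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["resource"]["value"], "->", binding["label"]["value"])
```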

Far from being utopian, then, the scenario of the Semantic Web is now real and producing results.

 

Big Data and Libraries

21 Aug

Big Data technology is poised to revolutionize all aspects of human life and culture as people collect and analyze large volumes of data for behavior prediction, problem solving, safety and numerous other applications, according to the Christian Science Monitor website.

The generation of large amounts of data is being driven by the increasing digitization of everyday activities and by people’s dependence on electronic devices, which leave “fingerprints”, a concept that can be extended to “information” traces, since any object, in any state of conservation, may contain “implicit” information that is not yet in a suitable format.

The CSMonitor site cites one Big Data project that is a remarkable effort: the Library of Congress archiving millions of tweets per day, which may cost a lot of money for its historical value.

One example cited is the work of Richard Rothman, a professor at Johns Hopkins University in Baltimore, which is fundamental to saving lives.

The Centers for Disease Control and Prevention (CDC) in Atlanta predicts flu outbreaks, and it does so through reports from hospitals.

But that took weeks; in 2009 a study appeared in which researchers could predict outbreaks much faster by analyzing millions of Web searches, queries such as “My son is sick”, and could detect a flu outbreak long before the CDC learned of it from hospital reports.

But Big Data technologies also face an opposing claim, in which the technology is perceived as having the potential to destroy privacy, encourage inequality and promote government surveillance of citizens and others in the name of national security. How can these two trends be reconciled?

 

Google failure makes internet traffic drop 40%

20 Aug

Last Friday, between 20:37 and 20:48 (GMT), all Google services suffered an outage: Gmail, Drive, Maps and, of course, Search. The company said in a statement that it lasted “between one and five minutes”.

Google said on its page that, during the interruption, “50% to 70% of requests to Google received” error messages, but that the service was corrected after four minutes and restored to most users within a minute.

According to the company GoSquared, there was a 40% drop in global internet traffic that night, and its study showed that, after the drop of a few minutes, internet traffic soared soon after the restoration.

Google, however, did not report the source of the failure.

The failure showed how fragile the internet still is, and that the possibility of even temporary damage is not a fallacy.

 

AltaVista, search engine pioneer, is shut down

03 Jul

Next Monday (08/07) the AltaVista search engine, a US pioneer with 18 years of existence, will be turned off; it kept its main page in the USA and was once very popular.

Also popular in the 90s were Lycos, Infoseek and Yahoo.

It was created in 1995, when the Web was newly born and the Internet was about twenty years old, but it changed owners several times: its creator Digital (DEC, Digital Equipment Corporation) was purchased by Compaq in 1998, and when Compaq was bought by HP it changed hands again.

HP sold the grandfather of search engines to Overture, which in the same year, 2003, was bought by Yahoo.
With this, the search giant’s empire grows ever stronger, though there is no lack of criticism of the semantics, relevance and volume of Google searches.

 

Scientific risk and "Big Data"

09 Apr

Major research in computing is turning to large volumes of data, so-called Big Data, according to a New York Times report.

Columbia University, to “debut” its new Institute for Data Sciences and Engineering, held a full-day symposium last Friday (05/04) entitled “From Big Data to Big Ideas”.

The institute is a combination of interdisciplinary centers for cybersecurity, financial analytics, health analytics, new media and smart cities.

There were several analyses of the set of technologies called Big Data, with new data and new artificial intelligence tools that could really transform industries, for example with new predictive capabilities.

The symposium, “From Big Data to Big Ideas”, was mainly a celebration of the technology’s promise in fields from health care to transportation, with presentations by Columbia professors and by computer scientists from companies such as Google, Facebook, Microsoft and Bloomberg.

The privacy and surveillance dangers of Big Data also came up in passing, but during a panel’s question-and-answer session Google’s chief information officer, Ben Fried, expressed a concern. “My worry is that the technology is far ahead of society,” Fried said. “There is a danger,” he suggested, “that only a technical elite will understand Big Data and its implications, with the risk of a runaway technology or a rejection by the public.”