
Archive for the 'Search engines' category

Fake news' days are numbered

12 Feb

False, malicious or biased news is nothing new: we have already mentioned the allegations made by Karl Kraus, the false report that actress Rita Hayworth (the stage name of Margarita Carmen Cansino, famous in the 1950s and 60s) had only two more years to live, and the misleading advertisement claiming that Nike was giving away Brazilian national team shirts.
Now the research organization Fraunhofer-Gesellschaft, in Germany, has developed a system that automatically analyzes social media posts and filters out false news and misinformation, so we can foresee a promising future.
It is worth mentioning that this is thanks to new technologies: the tool uses machine learning to filter the news, analyzing content and metadata, checking user interaction and optimizing its results in real time.
The tool also checks data volume (viralization processes), with graphs of sending patterns, frequency and follower networks.
According to the Fraunhofer website, Ulrich Schade said: "Our software can be customized and trained to meet the needs of any customer. For public agencies, it can be a useful early warning system."
Metadata is used as a marker, allowing fake posts to be flagged; that is, it plays a crucial role in differentiating authentic sources of information from false news.
So: how frequently does a site post, and how often and at what time is a tweet scheduled? The time of a post can be very revealing, as can the frequency of tweets and the follower network.
Timestamps can also reveal the country and time zone of the originator of the news, so the posting hours are essential for correct identification and location.
A high sending frequency suggests bots, which raises the probability of false news and can be detected easily, signaling a fake.
Social bots generally send their links to a large number of users, an example of how uncertainty is spread among the public.
Connections and account followers are also fertile ground for analysts: even when well-intentioned people are involved, the chance of a fake is high, and now a tool can detect it. Fake news' days are numbered.
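As a rough illustration of the frequency heuristic described above, here is a minimal sketch in Python. The data and the threshold are hypothetical assumptions for illustration only, not Fraunhofer's actual method:

```python
from datetime import datetime, timedelta

def looks_like_bot(timestamps, max_posts_per_hour=30):
    """Flag an account whose posting frequency exceeds a plausible human rate.

    timestamps: list of datetime objects for the account's posts.
    The 30-posts-per-hour threshold is an illustrative assumption,
    not an empirical value.
    """
    if len(timestamps) < 2:
        return False
    timestamps = sorted(timestamps)
    span_hours = (timestamps[-1] - timestamps[0]).total_seconds() / 3600
    span_hours = max(span_hours, 1e-9)  # avoid division by zero
    return len(timestamps) / span_hours > max_posts_per_hour

# Hypothetical example: 120 posts in two hours suggests automation.
start = datetime(2019, 2, 12, 8, 0)
burst = [start + timedelta(minutes=i) for i in range(120)]
print(looks_like_bot(burst))  # True
```

A real system would combine many such signals (posting hours, follower graphs, content analysis), as the article notes; frequency alone is only one marker.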

 

Is Web 4.0 emerging?

31 Oct

Tim Berners-Lee's initial impulse in creating, in the early 1990s, another protocol on the Internet (the Web and the Internet are different things) was to spread scientific information more quickly, so we can say it was an information-centered Web.
The Web quickly became popular, and then, with the growth of interest in the Semantic Web, Berners-Lee, James Hendler and Ora Lassila published the inaugural paper "The Semantic Web: a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities" (Scientific American, 2001), in which further development was designed around knowledge representation, ontologies, intelligent agents and, finally, an "evolution of knowledge."
Web 2.0's initial feature was interactivity (O'Reilly, 2005): users became freer to interact with web pages and could tag, comment on and share documents found online.
The article pointed to ontologies as a "natural" way to develop and add meaning to information in the Semantic Web, with methodologies from Artificial Intelligence, which in the eyes of James Hendler (Web 3.0) had gone through a creative "winter."
But three integrated tools were pointing to a new path: ontologies helped build simple knowledge organization schemes (SKOS, Simple Knowledge Organization System), databases could be queried with a language called SPARQL, and there was what was already basic to the Semantic Web, RDF (Resource Description Framework), in its simple descriptive language, XML.
The first major project was DBpedia, a database proposed by the Free University of Berlin and the University of Leipzig in collaboration with OpenLink Software in 2007, structured around Wikipedia, using billions of concepts to form RDF triples (resource, property and value), or more simply subject-predicate-object, indicating a semantic relationship.
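The subject-predicate-object model at the heart of RDF can be illustrated with plain Python tuples. The triples and the pattern-matching helper below are made-up examples for illustration, not actual DBpedia data or a real RDF library:

```python
# Each triple is (subject, predicate, object), the core of the RDF model.
triples = [
    ("Rita_Hayworth", "birthName", "Margarita Carmen Cansino"),
    ("Rita_Hayworth", "occupation", "Actress"),
    ("DBpedia", "extractedFrom", "Wikipedia"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    loosely mimicking what a SPARQL basic graph pattern does."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All facts about one subject, like: SELECT ?p ?o WHERE { :Rita_Hayworth ?p ?o }
print(match(triples, s="Rita_Hayworth"))
```

Real systems store such triples in dedicated triple stores and query them with SPARQL over standardized URIs, but the relational idea is exactly this.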
There are several types of intelligent agents in development that make little or no use of the "intelligence" of Web 3.0; will there be new developments in the future? In a recent article we pointed out the Semantic Scholar tool from the Paul Allen Foundation, but its connection to Web 3.0 (projects related to linked data) is also not clear.
2016 has definitely not been the year of the Smart Web, or if you prefer, Web 4.0, but we are getting closer: personal assistants (Siri, Cortana, Facebook's "M"), home automation (Apple HomeKit, Nest), image recognition and driverless cars are right around the corner.
Home automation means smart features for the home, and this field of AI is growing fast.

 

The internet will disappear, says head of Google

27 Jan

It is not what you are thinking: the Internet will become so pervasive that it will be impossible to connect to anything without using it, said Google chief Eric Schmidt last Thursday in Davos, Switzerland, where the World Economic Forum was held.

Asked about the future of the Internet, he replied: "I will answer very simply that the internet will disappear," according to video made available by the US television network CNBC.

But he did not mean that the Internet will follow the path of photographic film and floppy disks; Schmidt's understanding is that the network will be so present in our daily lives that it will be inescapable.

It may be present from the moment a person is born and in every moment of their life: family life, medical records, studies and various social activities, of course alongside the "right to be forgotten," the ability to erase unwanted records.

But the problem of privacy remains: how do we ensure that data is not lost or does not fall into hands that would misuse it? Here is a great problem to be solved.

 

(Portuguese) FIFA did not control tickets

01 Jul

Sorry, this entry is only available in Brazilian Portuguese.

 

(Portuguese) News and cardboard at Google I/O

27 Jun

Sorry, this entry is only available in Brazilian Portuguese.

 

(Portuguese) Google goes shopping again

20 May

Sorry, this entry is only available in Brazilian Portuguese.

 

Realities and fantasies of Web 3.0

14 Nov

In November 2006, John Markoff wrote in the New York Times, using the term Web 3.0 and saying that it would find new ways of mining human intelligence: "From the billions of documents that form the World Wide Web (WWW) and the links that bind them, computer scientists and a growing group of new businesses …" (see the New York Times).

Definitions vary widely: some think of customization features, others of the Semantic Web and the like; figures such as Conrad Wolfram think Web 3.0 will be the place where "the computer will generate new information," while pessimists such as Andrew Keen (The Cult of the Amateur) see in Web 3.0, the idea of connecting and organizing information on the Web, an "unrealisable abstraction," a return to experts and authorities.

Consider what is actually a founding text by James Hendler, published in IEEE Computer in January 2009, "Web 3.0 Emerging," which explains that, after countless laps around the Semantic Web, the technologies that can help it accomplish itself have finally been found.

The article explains that the technologies integrated into the emerging Semantic Web are already starting to produce results: from basic applications using RDF for the description of resources, to linking data from multiple Web sites using SPARQL, a standard SQL-like language that queries RDF, to connections ready in XML or OWL ontologies.

Far from being utopian, then, the Semantic Web scenario is now real and producing results.

 

Big Data and Libraries

21 Aug

Big Data technology is poised to revolutionize all aspects of human life and culture as people collect and analyze large volumes of data for behavior prediction, problem solving, safety and numerous other applications, according to the Christian Science Monitor site.

The generation of large amounts of data is driven by the increasing digitization of everyday activities and by people's dependence on electronic devices, which leave "fingerprints," a concept that can be extended to "information" traces, since any object, in any state of conservation, may contain "implicit" information that is not yet in a suitable format.

The CSMonitor site cites as a notable big data project the Library of Congress effort to archive millions of tweets per day, which may cost a lot of money but has historical value.

One example cited is the work of Richard Rothman, a professor at Johns Hopkins University in Baltimore, which has been fundamental in saving lives.

The Centers for Disease Control and Prevention (CDC) in Atlanta predicts flu outbreaks, and does so through reports from hospitals.

But that took weeks. In 2009 a study appeared in which researchers predicted outbreaks much faster by analyzing millions of Web searches, with queries such as "My son is sick," and could learn of a flu outbreak long before the CDC heard from the hospitals.
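The core idea, spotting an outbreak as a spike in search volume, can be sketched as a simple detector. The weekly counts and the spike factor below are invented for illustration; the actual study used far more sophisticated statistical models:

```python
def spike_weeks(weekly_counts, factor=2.0):
    """Return indices of weeks where query volume exceeds `factor` times
    the running average of all previous weeks.

    Counts and factor are illustrative assumptions, not real data.
    """
    flagged = []
    for i in range(1, len(weekly_counts)):
        baseline = sum(weekly_counts[:i]) / i
        if weekly_counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

# Hypothetical weekly counts of searches like "my son is sick":
counts = [100, 110, 95, 105, 400, 420]
print(spike_weeks(counts))  # → [4, 5]
```

Weeks 4 and 5 jump far above the baseline of roughly 100 queries, which is the kind of signal that would appear days or weeks before hospital reports reach the CDC.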

But the technologies for large volumes of data also meet resistance, as their potential to destroy privacy, encourage inequality and promote government surveillance of citizens, or of others, in the name of national security is perceived. How can these two trends be reconciled?

 

Google outage makes internet traffic fall 40%

20 Aug

Last Friday, between 20:37 and 20:48 (GMT), all Google services suffered an outage: Gmail, Drive, Maps and, of course, Search. The company said in a statement that it lasted "between one and five minutes."

Google said on its page that, during the period of interruption, "50% to 70% of requests Google received" returned error messages, but the service was corrected after four minutes and was restored for most users within a minute.

According to the company GoSquared, there was a 40% drop in global internet traffic that night, and its data showed that, after the outage of a few minutes, internet traffic soared right after restoration.

Google, however, did not report the source of the failure.

The failure showed how fragile the internet still is; the possibility of even temporary disruption is not a fallacy.

 

AltaVista, search engine pioneer, is shut down

03 Jul

Next Monday (July 8), AltaVista, the US search engine pioneer with 18 years of existence, will be shut down; it kept its main page in the USA and was once very popular.

Lycos, Infoseek and Yahoo were also popular in the 90s.

It was created when the Web was young and the Internet was in its twenties, in 1995, but it changed owners several times: its creator, Digital (DEC, Digital Equipment Corporation), was purchased by Compaq in 1998, and it changed hands again when Compaq was bought by HP.

HP sold the veteran search engine to Overture, which was itself bought by Yahoo in 2003.
With this, the search giant's empire grew stronger, though there was no lack of criticism regarding semantics, relevance and search volume in comparison with Google.