
Archive for the ‘Framework’ Category

Holographic Transmission 2022 Football Cup

19 Aug

This is what Japan promises: back in 2015 it bought this right without even knowing whether the technology would be mature, betting on 5G internet and on drones that map the movement of each player on the field. It is worth remembering that mapping player movement has been possible for a long time, with such games available at every Cup.

With the Qatar Cup under question, Japan has already offered to host, arguing that the 2002 World Cup (in which, by the way, Brazil was champion) proved its infrastructure capacity and the quality of its football stadiums; it now presents itself as an alternative to Qatar.

Suminori Gokon, chief director of the Japanese World Cup committee, said it was time to give something back to the participating countries, along with entertainment, through a free-viewpoint system that lets fans navigate the field and choose their own view of the game. I am already imagining a VAR with these resources: it could zoom in on a player's body to examine the contact in a foul.

Japan also proposes an as-yet-unseen augmented reality: not only capturing what players say by lip reading, but also translating their lines into several languages (photo); of course, covering the mouth with a hand to speak will become a total craze on football fields.

In South Africa the president once appeared by hologram to be present at two events at the same time; more recently, the Portuguese channel TVI used its anchor José Alberto Carvalho for a Vodafone promotional call from Vodafone Paredes de Coura, more than 400 km from the studio, and thanks to the speed of 5G transmission no delay could be noticed in his conversation with the distant caller.

Below is the video in which Japan presents itself as the promoter of a holographic broadcast of the 2022 World Cup:


Playing with the mind: Claude Shannon

06 May

It was time to pay tribute to Claude Shannon's contribution and genius: in 2017 two journalists wrote a book about the man who, in our view together with Vannevar Bush (whose student he was at MIT), created Information Theory, although the name might have been different, since he knew he was treating information like a “machine.”

In the book, information is called a message, but it could be called a signal. I have not read the book, but it was by studying Boolean algebra that Shannon began to build a digital world; opposing it to the analog is a second misconception, because everything we do is analog: the clock, for example, is an analogy of the movement of the sun and, of course, the rotation of the Earth.

In his 1937 master's thesis he analyzed electric circuits based only on “yes” or “no”, with the connectives “and”, “or” and “not”, and it was only with his 1948 work, “A Mathematical Theory of Communication”, that this became Information Theory.

The basic idea was to calculate the amount of information needed for a message (I prefer “signal”) so that information (of human origin, transmitted through a channel or physical medium) does not lose its original content.

Few works are cited so often in relevant literature: the paper has some 91,000 citations, and virtually all of “signal processing,” “imaging,” and any information transformed into a digital “signal” rests on his theory.

A central contribution of his work is the “sampling theorem”, which states that in any system the sampling frequency must be at least twice the highest frequency of the signal to be transmitted.
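To make the rule concrete, here is a minimal sketch in Python; the helper function and the numbers are purely illustrative. Sampling folds every frequency into the band from zero to half the sampling rate (the Nyquist interval), so a tone above that limit reappears as a lower “alias”:

    def alias_frequency(f_signal_hz, f_sample_hz):
        """Apparent frequency of a pure tone after sampling at f_sample_hz.

        Sampling folds all frequencies into [0, f_sample_hz / 2];
        tones above half the sampling rate map onto lower ones."""
        folded = f_signal_hz % f_sample_hz
        return min(folded, f_sample_hz - folded)

    print(alias_frequency(20_000, 40_000))  # 20000: preserved at the Nyquist rate
    print(alias_frequency(20_000, 30_000))  # 10000: aliased, 30 kHz is too slow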

Thus sound audible to the human ear, which reaches at most 20 kHz, must be sampled at a rate of at least 40 kHz, as the sketch above shows. Besides anticipating the computer along with Alan Turing, Shannon also built one of the first pieces of machine learning, quite simple but sophisticated for its time: a robot mouse in a maze:


Even waves can go to AI

20 Feb

The OpenAI project, said to be “non-profit”, really is open: just visit the project's blog to check its progress and possibilities. It even startled experts when it published a system that writes news and texts, theoretically fiction, but which can be classified as fake or, as they are being called, faketexts. The code is also open and available to developers on GitHub.
Natural language processing systems perform tasks such as question answering, machine translation, reading comprehension and text summarization, typically addressed with supervised learning on task-specific data sets; the text base behind GPT-2, however, is far larger.
GPT-2, successor to the GPT that merely produced texts from basic prompts, was trained on 40 GB of text from the Web, and what it produces is a little frightening for its clarity, its depth and, worst of all, for being pure fiction or, more plainly, fakes.
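Out of curiosity, generating text with the small released model can be sketched in a few lines. This uses the later Hugging Face “transformers” port rather than OpenAI's original GitHub code, and the prompt and settings are arbitrary:

    # Sketch: text generation with the small GPT-2 via the Hugging Face
    # "transformers" port (pip install transformers), not OpenAI's repo.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    out = generator("The holographic World Cup broadcast", max_length=50)
    print(out[0]["generated_text"])  # a plausible-sounding continuation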
In its syntactic qualities it is superior to others of its kind, writing passages that follow coherently from the preceding text and keep its style, without getting lost in long sentences. The problem is that it can generate fake news that is now much longer, becoming faketexts; a report from The Guardian shows the novelty and the problems:

 

Web Summit in Lisbon

08 Nov

One of the biggest events of the Web took place this week; I followed it from the sidelines, only through videos and news. Undoubtedly the biggest star was the founder of the Web, Tim Berners-Lee, who already has a great new project, although he spoke of it only between the lines.

He gave an interview, in which he in fact spoke at will without many questions, recalling the beginning of the Web and how its growth surprised even him, and gave technical details: “I wrote the code of the first server and the code of the first browser; it was called WorldWideWeb.app” and it ran on info.cern.ch.

He then said that his concern is the same as everyone's: after 25 years we must deal with cyberbullying, misinformation, hate speech and privacy issues, and he asked the public what many are asking, “What the hell could go wrong?”: “In the first 15 years… great things happened. We had Wikipedia, the Khan Academy, blogs, we had cats,” he said jokingly, adding: “Connected humanity should be more constructive, more peaceful, than disconnected humanity”, but it just has not been so.

“Because we are almost at the point where half of the world will be online,” explained the British engineer, referring to the ‘50/50’ moment, when half of humanity will be connected, a point that should be reached in May 2019.

After arguing for the responsibilities of governments and companies (I believe these can materialize, but slowly), he spoke indirectly of his Solid (Social Linked Data) project, stating that “as individuals we have to hold corporations and governments accountable for what is happening on the internet” and that “the idea is, from now on, everyone is responsible for making the Web a better place,” also encouraging start-ups to join this process.

He is thinking of interfaces through which users meet people from different cultures, but above all of ensuring the universality of the Web; for Berners-Lee the main point (again alluding to Solid) is popular intervention at a global level, the same that made the Web “just a platform, without attitude, that should be independent, that can be used for any kind of information, any culture, any language, any hardware, any software”; linked data may help with this.

Tim Berners-Lee presented the #ForTheWeb movement on the same day his World Wide Web Foundation released the report “The Case for the Web”. The event drew a huge audience of more than 30 thousand people; there are several videos, but the Opening Ceremony, which also features Tim Berners-Lee, is one of the most outstanding; see the video: https://www.youtube.com/watch?v=lkzNZKCxM

 

Wikipedia and Artificial Intelligence

24 Oct

Having almost reached the point of singularity (see our post), the point at which the machine would surpass human intelligence, the question now turns to consciousness.
In this sense the main criticism is the perpetuation of prejudices, which would prevent what I call hermeneutics, but this is an incorrect view of the evolution of digital technology, which offers, for example, Digital Ontologies and the ability to seek scientific studies beyond Wikipedia.
This is what an article in The Verge recently reported: researching scientists omitted from Wikipedia, it noted the most serious gap of all, that 82% of its written biographies are about men.
In a blog post, according to The Verge, John Bohannon, director of science at Primer, explains the development of the Quicksilver tool, which reads 500 million source documents, sifts out the most-cited names and then writes a draft article about the work of those scientists not mentioned in Wikipedia.
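The real tool is proprietary, but the basic loop it describes can be sketched. In this hypothetical simplification, we count the documents mentioning each candidate name and keep the most-mentioned names for which no English Wikipedia article exists; only the Wikipedia REST endpoint is a real, public API:

    # Hypothetical sketch of Quicksilver's basic idea: rank scientists by
    # mentions in a document set, keep those missing from Wikipedia.
    from collections import Counter
    import requests

    def has_wikipedia_page(name):
        """True if English Wikipedia has an article titled `name`."""
        url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
               + name.replace(" ", "_"))
        return requests.get(url, timeout=10).status_code == 200

    def candidates(documents, names):
        # Count how many documents mention each name, most-mentioned first.
        counts = Counter(n for doc in documents for n in names if n in doc)
        return [(n, c) for n, c in counts.most_common()
                if not has_wikipedia_page(n)]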
Two examples of illustrious women found and written up by the AI are Teresa Woodruff, a scientist who engineered mouse ovaries using 3D printers, cited by Time magazine in 2013 as one of the most influential people in the world, and Jessica Wade, a physicist at Imperial College London, who wrote the new entry for Pineau.
Wade was one of the scientists who said “Wikipedia is incredibly biased, and the underrepresentation of women in science is particularly bad,” and praised Quicksilver, stating that with it you can find large amounts of information very quickly.
Wikipedia will have to evolve with Machine Learning tools, and this may happen in the coming years; the fact that specific tools exist for this does not invalidate Wikipedia, it shows that it has weaknesses that should be corrected.

 

Paul Allen died

16 Oct

Co-founder of Microsoft with Bill Gates (photo), he was in fact the company's great developer; before Microsoft, Bill Gates had worked only on a version of the Basic language. It was Allen who suggested buying QDOS, a system developed by Tim Paterson while he worked at Seattle Computer Products; from it came MS-DOS, whose sale to IBM is the origin of Microsoft's millionaire business.

Paul Allen knew the system of Xerox's Palo Alto lab, which inspired the early versions of Windows, and later invested in Internet Explorer, in fierce competition with Netscape, which triggered the so-called browser war.

Paul Gardner Allen created a foundation bearing his name in 1988 to run philanthropic projects; between 1990 and 2014 he donated more than $500 million to more than 1,500 nonprofit organizations, most of it for technology, arts and culture projects, but with a significant slice (about $100 million) going to social development.

He died at 65, a victim of cancer, in his hometown of Seattle, where he owned the Seahawks football team, as well as the Portland Trail Blazers basketball team.

 

This is solid or liquid

08 Oct

It sounds like a joke, but it is not: the question actually appears on the Solid website, where it reads: What is Solid? It is the new Internet project by Tim Berners-Lee and MIT.

After Web 2.0, which included everyone but lacked validation of data, authorship and thought, Web 3.0 emerged from Linked Data in 2009, and this is in the composition of the name Solid: Social Linked Data. Although that reading of the acronym has been disputed, it circulated through the networks and makes complete sense. The main idea is to decentralize the Web and give greater security, offering users total control over the use of their data, as an article by Klint Finley explains in the prestigious magazine Wired.
The core is to give individual users full control over the use of their data, with validation, authorship and processing built on the concept of linked data.
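A small sketch of what “linked data” means in practice: the user's profile lives at a URL of their choosing, as an RDF document that any client can read using standard vocabularies. The pod address below is hypothetical, and rdflib is simply a common Python RDF library, not part of Solid itself:

    # Reading a (hypothetical) Solid WebID profile as linked data.
    from rdflib import Graph
    from rdflib.namespace import FOAF

    g = Graph()
    g.parse("https://alice.example/profile/card", format="turtle")  # user's pod

    # The profile is just RDF triples; FOAF gives the person's name.
    for person, _, name in g.triples((None, FOAF.name, None)):
        print(person, "->", name)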
The main startup of this project is Inrupt; according to Wired: “If everything goes as planned, Inrupt will be for Solid what Netscape was for Web beginners: an easy way in.” The magazine was invited to learn about the project at Berners-Lee's office, where he revealed several concerns.
Despite all the good we have achieved, the Web has fallen into a cycle of inequality and division, captured by “powerful forces that use it for their own interests,” Berners-Lee said, adding: “I have always believed that the Web is for everyone. That is why I and others fought hard to protect it,” and now a decisive step has been taken.
The Inrupt screen will bring together functions like WhatsApp, Google Drive and Spotify; it all looks the same, but the difference is that control will be personal: the individual, not the algorithms of social networks, will define priorities and strategies.
It is also an emerging need: just look at the screen of your cell phone or computer (I personally install few things) and you see a multitude of applications we do not even use; it is like a wardrobe full of old clothes waiting for an occasion that never comes.
The Solid project is here to stay; even though it is new and much of it is still a promise, its viability, necessity and potential are easy to see under MIT's seal.

 

Deep Mind Advanced Project

20 Sep

Projects that attempted to simulate brain synapses, the communication between neurons, were formerly called neural networks, and saw great development and many applications.
Gradually these projects moved toward studies of the mind, and the code was directed to Machine Learning, which, now using neural networks, came to be called deep learning; one advanced project is Google Brain.
Basically it is a system for creating and training neural networks that detect and decipher patterns and correlations in applied systems; although analogous, they only imitate the way humans learn and reason about certain patterns.
Deep Learning is a branch of Machine Learning that uses a set of algorithms to model data in a deep graph (complex networks) with several processing layers and that, unlike earlier neural network training, operates with both linear and non-linear patterns.
One platform that works with this concept is TensorFlow; originating from an earlier project called DistBelief, it is now an open-source system, released under the Apache 2.0 license in November 2015, and Google Brain uses this platform.
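As an illustration of the layered idea, here is a minimal TensorFlow sketch; the layer sizes and the single output are arbitrary placeholders, not Google Brain's actual models:

    # Minimal TensorFlow/Keras example of a small "deep" network.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),   # hidden layers
        tf.keras.layers.Dense(1)                        # regression output
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()  # prints the stack of layers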
In May 2016, Google announced for this system the TPU (Tensor Processing Unit), a programmable artificial-intelligence accelerator with high throughput for low-precision arithmetic (8 bits), which runs trained models rather than training them as the neural networks did before; a Google Compute Engine stage begins.
In the second step of this process, in Google Compute Engine, the second generation of TPUs reaches up to 180 teraflops (1 teraflop = 10^12 floating-point operations per second) and, mounted in clusters of 64 TPUs, works at up to 11.5 petaflops (64 × 180 teraflops ≈ 11.5 petaflops).

 

Revolutionary method for videos

19 Sep

Researchers at Carnegie Mellon University have developed a method that, without human intervention, transfers the content of a video from one style to another. The method is based on a technique known as Recycle-GAN, which can transform large amounts of video, making it useful for movies or documentaries.
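The central trick of this family of models (CycleGAN and, building on it, Recycle-GAN) can be hinted at in a few lines. In this toy PyTorch sketch the linear “generators” and random “frames” are stand-ins for real networks and video data; Recycle-GAN additionally trains next-frame predictors so the constraint holds over time:

    # Toy cycle-consistency: mapping a frame to the other style and back
    # should recover the original frame (L1 penalty when it does not).
    import torch
    import torch.nn as nn

    G_xy = nn.Linear(64, 64)  # style X -> style Y (stand-in generator)
    G_yx = nn.Linear(64, 64)  # style Y -> style X (stand-in generator)

    x = torch.randn(8, 64)                         # batch of style-X "frames"
    cycle_loss = (G_yx(G_xy(x)) - x).abs().mean()  # want this near zero
    # Recycle-GAN also composes the generators with a next-frame predictor,
    # so the round trip must preserve motion, not just single images.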

The new system can be used, for example, to colorize films originally in black and white; some have already been done, like the one shown in the video below, but the techniques were expensive and demanded many hours of human effort.

The process arose from experiments in virtual reality and from attempts to create “deep fakes” (altering objects or distorting content, for instance inserting a person into an image without permission), something that happens all the time in everyday scenes and that many people do not accept.

“I think there are a lot of stories to tell,” said Aayush Bansal, a Ph.D. student at the CMU Robotics Institute, who said film production was his main motivation for helping design the method, letting films be produced faster and cheaper, and added: “It is a tool for the artist, giving them an initial model they can improve,” according to the CMU website.

More information on the method, along with videos, can be found on the Recycle-GAN website.


Qubit Storage Record

26 Jul

The quantum bit, or qubit, is the unit of quantum storage, realized in the latest technology through photons.

Silicon chips, which come from old transistor technology oscillating between two states of electric charge (0 or 5 volts), have only those two states; quantum chips can have “1”, “0” and a third condition in which states are “entangled”, and this makes it possible to store more than two bits in a single photon, in this case 3 qubits per photon.

Using this fact of quantum physics, scientists at the University of Science and Technology of China have managed to store 18 qubits in only 6 entangled photons, which is a record.
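A toy numeric sketch of why this matters: the joint state of n qubits is a vector of 2^n amplitudes, so a handful of entangled photons describes an enormous state. This uses plain NumPy, and the Bell pair below is the textbook two-qubit example, not the Chinese team's 18-qubit state:

    # The joint state of n qubits has 2**n amplitudes.
    import numpy as np

    zero = np.array([1.0, 0.0])   # |0>
    one = np.array([0.0, 1.0])    # |1>

    # Entangled Bell pair (|00> + |11>) / sqrt(2):
    bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
    print(bell.size)   # 4 amplitudes for 2 qubits
    print(2 ** 18)     # 262144 amplitudes for the record 18-qubit state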

But not everything is cause for celebration: while the two simple states of the qubits are processed at practically light speed, the entangling operations take a few seconds, which, according to Sydney Schreppler, a quantum physics researcher at the University of California, Berkeley (USA), is an “eternity” for the processing of quantum bits.

The possibility of quantum storage will bring a jump not only in computer storage but also in speed, since photons travel at the speed of light (even if a physical medium such as optical fiber or the corresponding circuits reduces it somewhat), which will change computing power, making storage on the order of zettabits (10^21 bits) feasible.

The article was published in the scientific journal Physical Review Letters and is also available for reading on arXiv. [LiveScience]