Archive for the ‘circuits’ Category
Christmas gift
In December 1947, right around Christmas, three Bell Telephone Laboratories engineers (William Shockley, John Bardeen and Walter Brattain) invented the transistor (TRANsfer reSISTOR), which revolutionized communication and later computing; integrated circuits (ICs) are also built on the transistor.
The big event this Christmas will be the launch of the James Webb Space Telescope (JWST) from the Kourou spaceport in French Guiana, at 9:20 am local time, carried into space by an Ariane 5 rocket. Built at a cost of about $10 billion, it is the most expensive space science project in history.
What will we look for in space? Among other puzzles, the origin of the universe in its earliest stage; and it is curious that this happens at Christmas, and worth asking why this makes it so special.
Lights decorate the cities, even amid concern about a possible new wave of the pandemic and a new flu; they light up hope and make us look to the future and to the beginnings of our lives, our planet and humanity.
There is no escaping utopias, cosmogonies and religious eschatologies; even if we solve many enigmas, the philosophical question remains: why is there something rather than nothing?
The answer can only be ontological: there is a reason for being, and there is a promising future where hope remains if we return to the logic of Being, of dialogue and of the encounter with the Other.
The message can be no other than one of brotherhood and peace, neither yet fully achieved by humanity; Christmas reopens this hope.
This is the prophecy that Isaiah announces (Is 52:7) of a time of true peace: “how beautiful upon the mountains are the feet of him who proclaims and preaches peace, of him who proclaims good and preaches salvation, and says to Zion, ‘Your God reigns’”, the message of those who believed in the coming of the Savior, the birth of the Messiah among men.
The lights in the city announce not only this light visible to the eyes, but also the light that opens the heart to friendship and peace (Jn 1:4-5): “in him was life, and the life was the light of men. And the light shines in the darkness, and the darkness could not overcome it”, because even a tiny point of light already dissipates the darkness, and this is our hope.
May Christmas renew hopes for a possible and sustainable future for humanity, and may justice and peace reign in a new world.
Deep Mind Advanced Project
Projects that attempted to simulate brain synapses, the communication between neurons, were formerly called neuronal or neural networks, and saw broad development and many applications.
Gradually these projects moved toward studies of the mind, and the code was directed to machine learning; the branch that still uses neural networks came to be called deep learning, and one advanced project of this kind is Google Brain.
Basically it is a system for creating and training neural networks that detect and decipher patterns and correlations in applied systems; although analogous, they only imitate the way humans learn and reason about certain patterns.
Deep learning is a branch of machine learning that uses a set of algorithms to model data in a deep graph (complex networks) with several processing layers and that, unlike earlier neural-network training, operates on both linear and non-linear patterns.
One platform that works with this concept is TensorFlow. It originated from an earlier project called DistBelief and is now an open-source system, released under the Apache 2.0 license in November 2015; Google Brain uses this platform.
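To make the idea of “several layers of processing” concrete, here is a minimal sketch, not taken from the Google Brain code, of defining a small multi-layer network with TensorFlow’s Keras API; the layer sizes and the 784-value input (e.g. a flattened 28×28 image) are illustrative assumptions.

```python
# A minimal sketch (illustrative only): a small multi-layer network in
# TensorFlow's Keras API. Layer sizes and the 784-value input are
# assumptions, not taken from the original post.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),                       # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),                    # 10-class output
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With real data one would then train it, for example:
# model.fit(x_train, y_train, epochs=5)
```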
In May 2016, Google announced for this system the TPU (Tensor Processing Unit), a programmable artificial-intelligence accelerator with high throughput for low-precision arithmetic (8 bits), designed to run trained models (inference) rather than to train them as neural networks did; with it, a deep compute engine stage begins.
In the second step of this process, in Google Compute Engine, the second generation of TPUs achieves up to 180 teraflops (a teraflop is 10^12 floating-point operations per second) and, mounted in clusters of 64 TPUs, reaches up to 11.5 petaflops (64 × 180 teraflops ≈ 11.5 petaflops).
Material can help quantum chips
Researchers at the University of Central Florida (UCF) have discovered a type of material that could be used as a “building block” of quantum chips, consisting of hafnium, tellurium and phosphorus, Hf2Te2P.
According to UCF researcher Madhab Neupane, “Our discovery takes us one step closer to the application of quantum materials and helps us gain a deeper understanding of the interactions between various quantum phases.”
The material has more than one electron pattern developing within its electronic structure, giving it a range of quantum properties. Neupane says this material will increase computing power for large volumes of data on new devices and will considerably reduce the amount of energy needed to power electronics.
The discovery has already attracted companies investing in this research: Microsoft, for example, funds its project called Station Q, a laboratory dedicated to topological quantum computing, and Google has teamed up with NASA in an investment in quantum computing and artificial intelligence.
Because quantum phenomena still need to be better understood before electronics can be fully replaced by photonics and quantum computation, the computational landscape tends to change rapidly and continuously.
The discovery from Neupane’s lab is published in Nature Communications and is a big step toward this change of scenario.
Personal assistants arrive at the office
Some doctors’ offices already use Google Home, Assistant and Translate, in addition to the indispensable Calendar; whoever starts using it does not give it up, since it avoids scheduling conflicts and guards against forgetfulness. The idea now is to integrate these tools into a “Medical Digital Assist”, developed by the physician Steven Lin of Stanford University, as reported by CNBC.
According to the CNBC site, the project sits in the health group of the daring Google Brain project, part of Google’s artificial-intelligence division, and has the “ambitious goal” of deploying external health-care trials before the end of 2018.
The main goal, however, is to assist physicians with their reports and medical records. Before beginning the study, the Stanford School of Medicine ran a survey and found that doctors lose 6 to 11 hours of their working day documenting patients’ clinical histories, so questions are often simplified, yet patient responses may be inaccurate or omit relevant data.
The problem of accuracy is key: the CNBC website explains that misinterpreting “hypo” versus “hyper” can be fatal, since hypoglycemia is exactly the opposite of hyperglycemia, if the doctor does not check this carefully.
The first phase of this study is expected to conclude in August; Lin said both parties plan to renew the collaboration for a second phase of at least one year.
Microsoft and Amazon are also reportedly developing similar artificial-intelligence systems, and the main focus remains on producing clinical reports.
Intel Hardware Insecurity
Data storage used to have three levels: external memory (HDs), the computer’s main memory (RAM), and the innermost memories, formerly called registers and today referred to as kernel memory, which sit at the core of the computer and are the fastest, but can also become windows for data theft. Today there is a fourth level: external storage in clouds, computing centers scattered around the world that sell this storage.
A flaw in the design of Intel’s chips, which account for almost 90 percent of the world’s computer processors (smartphone chips are quite different), has just been uncovered, creating a data vulnerability.
AMD, a competitor of Intel, took the opportunity to point out that its kernel memory (the computer’s core memory) is unaffected by these attacks and does not allow access to passwords and other sensitive machine data through which a computer’s data could be stolen.
According to Paul Kocher, of the security company Rambus, speaking to the New York Times, the problem could be bigger when access happens in the cloud, where a large part of the data is already stored today; this is because machines (and thus their core memories) are shared, even considering the security protocols meant to prevent access to other memory levels.
Security issues at the giants Amazon, Microsoft and Google, in addition to the chipmaker Intel, may shake the market; besides AMD, other Eastern competitors should be on the lookout. We have flagged the key problem here.
Autonomous robots?
Autonomous robots is the name given to robots that, within the limits of their environment, can achieve desired goals (set by humans or by tasks organized in an algorithm) in unstructured environments without human help; for this reason autonomy comes in degrees.
For example, inside a factory where mechanical tasks are performed, a robot’s geographic space is limited to avoid accidents and its ability to detect a defect in the task being performed is restricted; a space robot, by contrast, must have fewer limits and be as autonomous as possible, since direct human action is impossible and communication is difficult because of distance.
The project called SWARM, funded by the European Union (we have already written a post about it), now has the first multi-robot system of autonomous assembly with sensorimotor coordination, observing similar robots around them; they can vary in shape and size according to the task and/or the work environment.
A central “brain” unit coordinates all of them through a system called MNS (Mergeable Nervous System), and they are thus reconfigured, keeping different capacities but combined under a single central controller.
They can also split up and perform self-repair, eliminating defective body parts, including a brain unit with a defect; of course, one must define what counts as a defect and what as a self-repair.
In autonomous robots, as learning and strategy adapt to the environment, what they can do with their autonomy increases; as for the direction that can be imparted to that autonomy, the article does not yet address this.
The current model has 10 units, and the authors, in the paper published in Nature Communications, claim that the project is scalable, both in terms of computational resources for robotic control and in reaction time to stimuli within the system.
How is cyber-brain research going?
There is a lot of research mapping the brain and investigating how certain functions work, such as motor function, vision and, in a very special way, cognition. Cognitive neuroscience studies a person’s cognitive abilities (knowledge), such as reasoning, memory and learning.
A recent study by the Universities of Exeter and Oxford in the United Kingdom, in conjunction with the University of Münster in Germany, has developed photonic microchips that imitate the synapses of the human brain using light rather than electricity, as other chips do.
The chips are manufactured using not a frequency effect but combined phase change in integrated photonic circuits designed for this purpose; these synapses can operate at 1,000 times the speed of human ones, which does not mean they do the same thing.
The researchers say this is a fundamental step toward machines capable of functioning, and of “thinking”, in a way similar to the brain, since photons are fast and consume little energy.
David Wright told the University of Exeter website that the project addresses two important issues in electronic computing, speed and efficiency as well as the limits of parallel processing, “not only by developing…new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution”.
The paper “On-chip photonic synapse”, by Zengguang Cheng, Carlos Ríos, Wolfram Pernice, C. David Wright and Harish Bhaskaran, is published in Science Advances.
What they say about the iPhone 8 and 8 Plus
I do not know if they had said anything before, but yesterday I was able to read the first reviews of the iPhone 8 and the Plus model. Matthew Panzarino of TechCrunch highlights the camera: “with augmented reality and computer vision emerging as competitors in the next big wave of development platforms, the camera system will be an [important] input mechanism, a communication system and a declaration of intent.”
Another important technology site is Engadget, where Chris Velazco wondered how ARKit apps would work, enjoyed the augmented-reality experience, and said that rendering virtual objects on physical planes made them “stick to surfaces better than similar Tango apps”.
Another strong site in the area is The Verge, where Nilay Patel commented: “Just like on Samsung, the iPhone’s images are now more saturated by default, although Apple says it’s still aiming for realism instead of the saturated colors and smoothing of the S8”, and later said that he took pictures with an iPhone 8, a Pixel XL, an S8 and an iPhone 7 on automatic, and the iPhone 8 produced the most consistent and rich images of the group.
The software novelty is the Portrait Lighting feature, which allows lighting effects with the front camera; the battery lasts about 11 hours, another review notes.
Lastly, at perhaps the most important tech site, David Pierce of Wired said that “the phones are very good and impressive, and yet they are not the best Apple devices. The iPhone X represents the vision of the future of Apple, as well as Samsung, Essential, Huawei and many others.”
More and more is expected from cameras and graphics-processing apps; performance and memory still seem important but are fading into the background. The TechCrunch site, for example, notes that “the A11 chip from Apple has a performance that is compatible with the Core i5 of MacBook Pro.”
With the growing importance of graphics and image processing, higher-definition OLED screens will be important.
Unknown Stories of Computing
Charles Babbage designed two machines, the Analytical Engine and the Difference Engine; these machines, their systematization and the ideas behind them would not have reached us without the patient work of Ada Lovelace (1815-1852), daughter of Lord Byron, who compiled and organized the work of this pioneer, making it understandable to the mathematicians of the time.
Later, David Hilbert (1862-1943) listed 23 mathematical problems that were unsolved at the time, one of which was to organize an axiomatic system capable of settling the problem of computability by algorithms. Kurt Gödel, thinking about this problem, produced a paradox about the completeness of such systems, showing that there are assertions within a system that the system itself cannot prove, so consistency problems weaken such systems.
Thus it was necessary that logic, besides being constructed with good properties, have consistency (no contradictions), completeness (every proposition would be either true or false, exclusively) and decidability (the existence of a method allowing one to establish, for any formula, whether it was true or false).
Hilbert called this latter property the “Entscheidungsproblem”, or “decision problem”.
Alan Turing and Claude Shannon worked on machines for encoding (US government messages) and decoding (a machine called Enigma had been captured from Hitler’s army); since both projects were secret, they met over meals and work breaks, as James Gleick’s book indicates, to talk about the problem proposed by Hilbert and not settled by Gödel. A declassified document confirms this stay of Turing, who was English, at Bell Laboratories, where he worked on deciphering the Enigma machine’s code.
Shannon at that time worked as an assistant at MIT in Vannevar Bush’s laboratory; Bush had proposed a “reading” machine called MEMEX (it appeared in TIME magazine), which was not a computer itself but a machine for cross-referencing information from books.
It was Vannevar Bush who suggested Boole’s algebra to Claude Shannon.
Later, the model of the mathematician Alonzo Church complemented Alan Turing’s design, and what is called the Turing Machine is in fact based on the Church/Turing model.
Norbert Wiener’s models were electronic models of feedback machines; although he founded cybernetics, the idea was to create models of movement and turn them into problem-solving models. He was a contemporary of Vannevar Bush at MIT.
Chip revolutionizes IoT
A nearly bankrupt company decided to invest in the development of a chip for the Internet of Things (IoT); the move not only saved the company itself but promises to revolutionize the market.
The chip is the ESP8266, from the company Espressif, priced at about US$ 5 and integrated into several solutions. It communicates, for example, through the serial interface found in most computer models, the UART (Universal Asynchronous Receiver-Transmitter), which means it can give such devices a TCP/IP Internet interface.
It is a system-on-chip with built-in Wi-Fi. It has GPIO pins, I2C and SPI buses, UART, an ADC input, PWM output and an internal temperature sensor; a 32-bit RISC CPU running at 80 MHz (capable of 160 MHz), 32 KB of instruction RAM, 96 KB of data RAM, 64 KB of boot ROM, and external SPI flash memory (a Winbond W25Q40BVNIG of 512 KB).
To program it, the company maintains a repository on GitHub, where it provides its SDK and code samples for the RTOS firmware and for AT commands; in addition, there is an ESP8266 developer forum, maintained by Espressif, where a wide range of material can be found.
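As a taste of how simple this is, here is a minimal sketch, not taken from the Espressif documentation, of a PC talking to an ESP8266 running the stock AT-command firmware over UART, using the pyserial library; the serial port name and the 115200 baud rate are assumptions that depend on your adapter and firmware build.

```python
# A minimal sketch (assumptions: port name, 115200 baud, stock AT firmware):
# sending AT commands to an ESP8266 over its UART from a PC with pyserial.
import serial

def send_at(ser, command, wait_lines=8):
    """Send one AT command and return the raw response lines."""
    ser.write((command + "\r\n").encode("ascii"))
    return [ser.readline().decode("ascii", errors="replace").strip()
            for _ in range(wait_lines)]

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
        print(send_at(ser, "AT"))        # basic "is it alive" test, expects OK
        print(send_at(ser, "AT+GMR"))    # report firmware version
        print(send_at(ser, "AT+CWLAP"))  # list visible Wi-Fi access points
```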
Will we have new garage developers around the world?