Channel: Smart2.0

300,000-pixel backlit CMOS image sensor for smart applications

CMOS imaging solutions provider SmartSens Technology (Shanghai, China) has launched what it says is the world's first 300,000-pixel backside illumination (BSI) global shutter CMOS image sensor.

The SC031GS is offered as the world's first commercial-grade 300,000-pixel global shutter CMOS image sensor based on BSI pixel technology. According to the company, the device features a superior signal-to-noise ratio, higher sensitivity, and greater dynamic range than similar products, and suits a wide range of commercial products, including smart barcode readers, drones, smart modules, and other image recognition-based artificial intelligence (AI) applications, such as facial recognition and gesture control.

The SC031GS, says the company, provides wider range, higher speed, and more accurate recognition for smart barcode readers than similar products, keeping pace with the rapid growth of the Internet of Things (IoT) and the complex situations encountered in real life. Unmanned aerial vehicles (UAVs) can also benefit from its high response speed, wide dynamic range, and low power consumption.

Based on BSI pixel design technology and using large 3.75µm pixels (1/6" optical format), the SC031GS is said to deliver excellent imaging quality in low-illumination environments. In addition, the company's single-frame HDR technology, combined with a global shutter, ensures accurate acquisition of image information in complex motion and lighting scenes.

Compared with conventional CMOS image sensors using multiple-exposure HDR technology, the company says, its single-frame HDR global shutter technology is better suited to image recognition-based AI applications.

"SmartSens is not only a new force in the global CMOS image sensor market, but also a company that commits to designing and developing products that meet the market needs and reflect industry trends," says Leo Bai, General Manager of SmartSens' AI Image Sensors Division. "We partnered with key players in the AI field to integrate AI functions into the product design. SC031GS is such a revolutionary product that is powered by our leading Global Shutter CMOS image sensing technology and designed for trending AI applications."

The SC031GS is currently in mass production.

SmartSens Technology


AI material prediction platform launched

Applied materials science and nanotechnology company Lumiant Corporation (Calgary, Alberta, Canada) has launched what it says is the world's first artificial intelligence (AI) material prediction platform for materials discovery.

The platform, called Xaedra, was created to address today's materials discovery process, which, says the company, is tedious, expensive, and slow. Current approaches rely on sophisticated trial and error and some level of simulation, resulting in only a small number of all known compounds having been characterized, says the company, while most materials remain "undiscovered" for potential uses.

Xaedra is offered as the first AI platform to successfully predict – not simulate – a wide variety of properties for over 50,000 known compounds.

"Xaedra first creates atomistic 'fingerprints' of the 50,000 known compounds in its database," says Dr. Pawel Pisarski, the creator of Xaedra. "The user then defines a property of interest – mechanical, chemical, thermodynamic, electrical, or other. Like any machine learning system, some level of measured data must be loaded into the database; then, using the 'fingerprints', the neural network is trained and Xaedra predicts the property for the remaining compounds."
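As a rough illustration of the workflow Pisarski describes – fingerprint the compounds, train on a measured subset, predict the rest – here is a minimal sketch, with random stand-in fingerprints and a ridge-regression surrogate in place of Xaedra's neural network. All names and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in atomistic "fingerprints": one feature vector per compound.
# In Xaedra these would encode composition and structure; here they are random.
n_compounds, n_features = 200, 8
fingerprints = rng.normal(size=(n_compounds, n_features))

# Pretend the property of interest depends on the fingerprint via a rule
# unknown to the model (a hidden linear relation plus measurement noise).
hidden_rule = rng.normal(size=n_features)
property_values = fingerprints @ hidden_rule + 0.01 * rng.normal(size=n_compounds)

# Only a small measured subset is "loaded into the database".
X_meas, y_meas = fingerprints[:40], property_values[:40]

# Ridge-regression surrogate trained on the measured data.
lam = 1e-3
w = np.linalg.solve(X_meas.T @ X_meas + lam * np.eye(n_features), X_meas.T @ y_meas)

# Predict the property for the remaining, unmeasured compounds.
predictions = fingerprints[40:] @ w
rms_error = float(np.sqrt(np.mean((predictions - property_values[40:]) ** 2)))
print(f"RMS prediction error on unmeasured compounds: {rms_error:.3f}")
```

The point of the sketch is the data flow, not the model: a handful of measurements plus informative fingerprints is enough to rank every remaining compound by predicted property.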

Charlie Baker P.E., leader of Xaedra Business Development adds, "We believe that Xaedra will both enable and disrupt a wide range of technical fields, including energy storage, electronics, solar cells, catalysts, lightweight structures, and many others. Prospecting for breakthrough materials can be done at the computer, and lab resources redeployed to confirmation and validation. We think this can both level the playing field between entrepreneurs and established companies as well as spark a wave of innovation in material science."

A limited number of Beta user opportunities are currently available.

Lumiant Corporation

Related articles:
Open materials database advances battery design
New cathode material could triple lithium battery capacity
Material found that harnesses multiple forms of energy simultaneously
Nanostructures of "impossible" materials open up perspectives for optoelectronics
New material class promises zero-ohm conductivity
New materials promise thinner, more energy efficient circuits

Rutronik expands portfolio with new high input voltage buck controllers from Vishay

With the SiC46X family, Vishay introduces a range of new high-performance synchronous buck controllers with high input voltage and integrated high-side and low-side power MOSFETs. Their power output stage can deliver high continuous current at switching frequencies of up to 2MHz.

The controllers generate an output voltage adjustable down to 0.8V from input voltages between 4.5V and 60V. This makes them suitable for a wide range of applications, including computers, consumer electronics, telecommunications, and industrial equipment. Over the normal operating temperature range of -40°C to +125°C they guarantee an output voltage accuracy of ±1%.
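As background on how such an adjustable output is typically set, a buck controller usually compares an internal reference (here 0.8V, the minimum output) against an external resistor divider from the output, giving VOUT = VREF × (1 + R_top / R_bot). A quick sketch; the resistor values are illustrative, not taken from the SiC46X datasheet:

```python
# Output-voltage programming via the external feedback divider.
VREF = 0.8  # volts: the minimum (reference) output voltage from the article

def output_voltage(r_top: float, r_bot: float, vref: float = VREF) -> float:
    """Output voltage programmed by the feedback divider."""
    return vref * (1 + r_top / r_bot)

# Example: 50 kOhm over 16 kOhm programs a 3.3 V rail.
print(f"VOUT = {output_voltage(50e3, 16e3):.2f} V")  # prints VOUT = 3.30 V
```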

The SiC46X architecture offers ultra-fast transient response with minimal output capacitance and tight ripple regulation at very light load. The device is compatible with any capacitor type, and no external ESR network is required for loop stability. A power-saving scheme significantly increases efficiency at light load.

Controller protection mechanisms include overcurrent, output overvoltage, short-circuit, and output undervoltage protection as well as thermal shutdown. The devices also feature undervoltage lockout (UVLO) for the input rail and a programmable soft start.

Developers can choose between three operating modes: forced continuous conduction, power-save, or ultrasonic mode. With a switching frequency adjustable between 100kHz and 2MHz and an adjustable current limit, the SiC46X family is highly configurable.

The SiC46X family is available in pin-compatible 2A, 4A, 6A, and 10A versions in a lead (Pb)-free 5mm x 5mm MLP55-27L package at www.Rutronik24.com. To support the design process, Vishay offers PowerCAD Online Simulation (https://vishay.transim.com).

Further information: Rutronik24.com

Machine Learning: New method enables accurate extrapolation

Scientists have developed a new machine learning method that can make robots safer. Their approach uses simpler and more intuitive models of physical situations.

To ensure the safe operation of a robot, it is crucial to know how the robot reacts under different conditions. But how do you know what disturbs a robot without actually damaging it? The machine learning method developed by scientists from the Institute of Science and Technology Austria (IST Austria) and the Max Planck Institute for Intelligent Systems uses observations made under safe conditions to make accurate predictions for all possible conditions determined by the same physical dynamics. The method is specially developed for real situations and offers simple, interpretable descriptions of the underlying physics.

Traditionally, machine learning can only interpolate data - that is, make predictions about a situation that lies "between" other, known situations. Machine learning could not extrapolate - that is, it could not make predictions about situations outside the known situations, since it only learns to model known data locally as accurately as possible. Collecting enough data for effective interpolation is also time and resource intensive and requires data from extreme or dangerous situations. Georg Martius, former postdoc at IST Austria and group leader at the MPI for Intelligent Systems in Tübingen, Subham S. Sahoo, a PhD student at the MPI for Intelligent Systems, and Christoph Lampert, professor at IST Austria, have now developed a machine learning method that addresses these problems. This method can for the first time perform precise extrapolations for unknown situations.

The special thing about the new method is that it tries to find out the true dynamics of the situation: Based on the data, it draws conclusions and computes equations describing the underlying physics. "If you know these equations," says Georg Martius, "you can say what will happen in all situations, even if you have not seen them." This is what enables the method to extrapolate reliably and makes it unique among machine learning methods.
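A toy example illustrates the distinction Martius draws. Below, a falling object is observed only for times t in [0, 1]; fitting a small physics-motivated basis recovers an interpretable, textbook-style equation that extrapolates correctly to t = 3, while a black-box stand-in (a high-degree polynomial) fits the same data but need not extrapolate. This is only a conceptual sketch, not the team's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Safe" observations: a falling object measured only for t in [0, 1].
g, v0 = 9.81, 12.0
t_train = np.linspace(0.0, 1.0, 50)
y_train = v0 * t_train - 0.5 * g * t_train**2 + 0.01 * rng.normal(size=50)

# Interpretable route: least-squares fit over a small physics-motivated
# basis [t, t**2], yielding a simple equation y = a*t + b*t**2.
A = np.column_stack([t_train, t_train**2])
(a, b), *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Black-box stand-in: a degree-9 polynomial fit to the same data.
poly = np.polynomial.Polynomial.fit(t_train, y_train, deg=9)

# Extrapolate well outside the observed range.
t_new = 3.0
truth = v0 * t_new - 0.5 * g * t_new**2
eq_pred = a * t_new + b * t_new**2
print(f"truth = {truth:.2f}, equation model = {eq_pred:.2f}, "
      f"black-box polynomial = {poly(t_new):.2f}")
```

The recovered coefficients land close to v0 and -g/2, so the equation model predicts situations it has never seen, which is exactly the extrapolation property the article describes; the polynomial matches the training interval equally well but carries no physical meaning outside it.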


The method is unique in many ways. First, the solutions previously created by machine learning were far too complex for human beings to understand. The equations resulting from the new method are much simpler: "The equations of our method are something you would see in a textbook - simple and intuitive," says Lampert. The latter is another advantage: other machine learning methods do not give any insight into the connection between inputs and results - and thus no insight into whether the model is plausible at all. "In all other research areas we expect models that make sense physically and tell us why," adds Lampert. "We should expect that from machine learning and that is what our method offers." Therefore, the team based its learning method on a simpler architecture than usual methods to ensure interpretability and to optimize it for physical situations. In practice, this means that less data is needed to achieve the same or even better results.

And it's not all theory: "In my group we're working on developing a robot that uses this kind of learning. In the future, the robot would experiment with different movements and then be able to find the equations that describe its body and its movement so that it can avoid dangerous actions or situations," says Martius. While the main focus of research is on robotic applications, the method can be used with any type of data - from biological systems to X-ray transfer energies. It can also be integrated into larger machine learning networks.

The researchers are presenting their findings at this year's International Conference on Machine Learning (ICML), which takes place July 9 through July 15, 2018, in Stockholm.

Startup receives $8.4 million in funding for "self-healing software"

Aurora Labs (Munich and Tel Aviv), developer of a predictive maintenance solution for automotive software, has received $8.4 million in a first funding round. Financing is provided by Fraser McCombs Capital and MizMaa Ventures, which has previously invested in Aurora Labs. Aurora will use the funds to expand its global market presence and drive forward its research and development activities.

In today’s cars more and more functions are implemented in software; most innovations in vehicle construction come from software. This confronts car manufacturers with ever shorter development cycles and frequent and unpredictable software problems. This, in turn, results in higher recall rates. In 2017, for example, 15 million vehicles were recalled due to software errors, costing billions of dollars. The predicted increase in the amount of software code in vehicles will reinforce this trend.

Aurora Labs offers a predictive maintenance solution for connected cars and autonomous vehicles. Its machine learning algorithms address all three stages of vehicle maintenance: The platform detects errors in software behavior and predicts downtimes, and it corrects errors in the ECU software. In addition, the clientless Over-the-Air (OTA) update solution from Aurora Labs offers fast ECU updates without downtime and without the double storage requirements otherwise associated with software updates over-the-air.

"The number of lines of code in vehicles is already around 150 million and is expected to increase further," argues Aurora Labs CEO and co-founder Zohar Fox. "On average there are about 15 to 50 errors per thousand lines of code, 15 percent of which are overlooked by quality assurance. This highlights the need for solutions that can predict downtime before it leads to security problems."
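Fox's figures can be checked with simple arithmetic:

```python
# Back-of-envelope check of the defect figures quoted by Aurora Labs' CEO.
lines_of_code = 150e6          # lines of code in a modern vehicle (per the article)
defects_per_kloc = (15, 50)    # quoted range of errors per thousand lines
escape_rate = 0.15             # share of errors said to be missed by QA

summary = {}
for rate in defects_per_kloc:
    total = lines_of_code / 1000 * rate
    escaped = total * escape_rate
    summary[rate] = escaped
    print(f"{rate} errors/KLOC -> {total:,.0f} errors total, {escaped:,.0f} escape QA")
```

Even at the low end, that leaves hundreds of thousands of latent software errors per vehicle fleet, which is the scale of the problem Aurora Labs' over-the-air maintenance is aimed at.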

To date, Aurora Labs has three pilot projects underway with major automotive OEMs. Further projects are planned for the coming months. Founded in 2016 by Zohar Fox and Ori Lederman, the company has offices in Tel Aviv and Munich. Aurora Labs has developed a process called Line of Code Maintenance. Using machine learning, it can detect errors in the embedded software of the vehicles and repair them using OTA. 

More information: https://www.auroralabs.com/

Proximity and gesture recognition for interaction with display GUI

With the new E909.21 controller ICs and the E909.22 conditioner, chip manufacturer Elmos (Dortmund, Germany) presents a solution for optical proximity and gesture recognition in cars. The two chips were developed for use in larger automotive displays.

The combination of controller and conditioner provides a coordinated solution for precise interaction with the graphical user interface (GUI). Solutions built with the E909.21 and E909.22 recognize actions such as approach, swipe, air slider, and magnification. Object detection and motion evaluation run in real time on the basis of simple infrared technology.

Elmos claims to be the world's No. 1 in vehicle gesture recognition applications, citing what it says is the best ambient light immunity available on the market and automatic system calibration. In addition, numerous functions have been integrated to simplify system and sensor design, making the modules a plug-and-play solution adapted to the application. The E909.21 is used in series production by well-known OEMs in various vehicle models.

The ICs are based on the established HALIOS principle, which the company says is the only method that works with true optical compensation. This enables the almost complete neutralization of parasitic physical effects at the receiver, from ambient light to temperature drift, far beyond the possibilities of the individual components.

The E909.21 and E909.22 each have two receiver and four LED transmitter channels as well as a special HALIOS compensation path. A 100mA current driver is integrated on the IC for each transmit channel. The HALIOS switching frequency can be set up to 1MHz to eliminate interference with other optical systems. The integrated 16-bit Harvard-architecture CPU can be clocked at 4, 8, 12, or 24MHz, and 32kByte Flash, 4kByte SRAM, and 8kByte SysROM are integrated in the IC. The temperature range extends from -40°C to +85°C. The E909.21 is supplied in a QFN32L5 package; the E909.22 is available in a QFN20L4 package.

On request, the device is also available with an integrated bootloader, which allows the device to be programmed via one of its two serial interfaces (I²C / SPI). In addition, the developer is supported by firmware demo code when creating the initialization and calibration routines. Numerous application notes and a gesture library complete the development environment.

Information on possible applications and technical details: https://www.elmos.com/fileadmin/2013/06_publications/elmos-optical-ir-sensors_halios.pdf

 

Power converter supports both IoT and wearable devices

Japanese power chip maker ABLIC has launched a range of high-efficiency step-down switching regulators with a supply voltage divided output for IoT and wearable devices.

The S-85S0P Series combines a supply voltage divided output with a current consumption of 260nA in a single chip. The supply voltage is divided to VIN/2 or VIN/3 at the output, so a low-voltage microcontroller's analogue-to-digital converter can monitor a battery voltage directly.
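To see why a divided output helps, consider a fully charged 3.6V cell feeding a microcontroller whose ADC input range tops out at 1.8V: the raw battery voltage is out of range, but VIN/2 fits. A minimal sketch; the voltages are illustrative assumptions, and the S-85S0P datasheet defines the actual limits:

```python
# Why a divided supply output helps battery monitoring: bring the battery
# voltage into a low-voltage ADC's input range without an external divider.
def divided_output(vin: float, ratio: int) -> float:
    """Supply voltage divided output, as offered by the S-85S0P (VIN/2 or VIN/3)."""
    assert ratio in (2, 3)
    return vin / ratio

adc_full_scale = 1.8  # volts, for an assumed low-voltage microcontroller ADC
for vin in (3.6, 3.0, 2.4):
    v = divided_output(vin, 2)
    status = "within" if v <= adc_full_scale else "outside"
    print(f"VIN = {vin:.1f} V -> VIN/2 = {v:.2f} V ({status} ADC range)")
```

A fixed on-chip division also avoids the external resistor divider that would otherwise drain the battery continuously, which is the battery-saving point ABLIC makes below.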

This is the only chip in the industry with a supply voltage divided output, says ABLIC, which reduces battery drain for IoT and wearable devices.

It uses a compact SNT-8A (2.46 x 1.97 x 0.5 mm) package.

Also part of the lineup are the S-1740 and S-1741 Series, which combine a supply voltage divided output with ultra-low current consumption LDO regulators in a single chip.

www.ablicinc.com/en/

LDO regulator targets wireless IoT sensors

Semtech has launched a low drop out (LDO) regulator aimed at sensors for the Internet of Things (IoT) that use its LoRa long range wireless technology.

A consistent, low-noise voltage output is necessary for low-power radio devices, such as LoRa-based sensors, so that supply noise does not interfere with radio transmission. The nanoSmart SC573 has a low quiescent current of 50μA that extends the operating life of battery-powered IoT sensors up to 10 years, while providing an output noise of just 100μVRMS.
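The 10-year claim can be sanity-checked against the quiescent current alone; this ignores load current and battery self-discharge, so it is only a lower bound on the capacity required:

```python
# Charge drawn by the quoted 50 uA quiescent current over ten years.
i_q = 50e-6            # quiescent current in amps, from the article
hours = 24 * 365 * 10  # ten years of continuous operation
charge_mah = i_q * hours * 1000
print(f"Charge consumed over 10 years: {charge_mah:,.0f} mAh")
```

So the regulator's own current alone consumes about 4.4Ah over a decade, roughly the capacity of a couple of primary lithium AA cells, which is why sub-100µA quiescent currents matter for decade-life sensor nodes.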

It has an internal 100Ω output discharge and a dropout of 180mV at a 300mA load, and is aimed at developers designing solutions for industrial and consumer applications including smart metering and smart buildings.

“The nanoSmart LDO is essential for low power devices that need continuous supply voltage,” said Francois Ricodeau, Senior Product Line Manager for Semtech’s Wireless and Sensing Products Group. “The combined functionality of low noise and energy savings makes nanoSmart the regulator of choice for LoRa-based applications, and consumer and industrial electronics.”

The nanoSmart LDO is currently available in two output voltages (3.3V and 1.8V) and is priced at $0.13 in volumes of 10,000 units.

www.semtech.com


Hyundai invests in U.S. solid-state battery startup

In a further positioning for future solid state battery technology, the venture arm of car maker Hyundai Motors has invested in startup Ionic Materials (Woburn, MA).

The investment by Hyundai CRADLE, Hyundai's corporate venturing business, gives the South Korean car maker early access to the technology, alongside the Renault/Nissan/Mitsubishi Alliance.

Massachusetts-based Ionic Materials has so far raised $65m to commercialise its patented solid polymer material, which enables solid-state batteries that are inherently safe, high in energy density, and operational at room temperature with little or no cobalt in the cathodes. This is important, as the availability of cobalt has been identified as one of the risks for volume production of such batteries.

Volkswagen has invested in QuantumScape, Honda is working with General Motors, Suzuki is working with Toshiba, Toyota is working with Panasonic, and BMW is working with Solid Power.

"Ionic Materials' breakthrough technology could significantly improve battery technology today," said John Suh, vice president of Hyundai CRADLE. "We are always looking for ways to ensure our cars provide the highest level of clean and efficient solutions. Our investment in Ionic Materials will keep us at the forefront of battery development, allowing us to build better eco-friendly vehicles."

"The investment by Hyundai represents another key company milestone and demonstrates our rapid momentum as we develop polymer-based materials for solid-state batteries," said Mike Zimmerman, founder and CEO of Ionic Materials. "With the ongoing help of our investment partners, we have expanded our facilities and are adding to our team to meet the ever-growing demand for this technology."

The company is also looking at using the material with other battery chemistries, including lithium metal, lithium sulfur, and inexpensive rechargeable alkaline batteries.

ionicmaterials.com

Related stories:
Cobalt shortages to hit battery prices
Volkswagen bets $100m on solid state batteries
Honda teams with GM for lithium batteries
Ilika leads on key UK fast charging solid state battery project

Researchers boost near-infrared OLED efficiency

In a paper titled "Exploiting singlet fission in organic light-emitting diodes," published in the journal Advanced Materials, researchers from Kyushu University demonstrate how they can boost near-infrared OLED efficiency.

Harvesting both triplets and singlets yields electroluminescence quantum efficiencies of nearly 100% in OLEDs, but the production efficiency of excitons that can undergo radiative decay is theoretically limited to 100% of the injected electron–hole pairs, the paper explains.

The researchers broke this limit by exploiting singlet fission to produce triplet excitons in a rubrene host matrix; these are emitted as near-infrared (NIR) electroluminescence by erbium(III) tris(8-hydroxyquinoline) (ErQ3) after excitonic energy transfer from the "dark" triplet state of rubrene to an "emissive" state of ErQ3.

This singlet-to-triplet exciton conversion led to NIR electroluminescence with an overall exciton production efficiency of 100.8%, reports the paper, pointing to new strategies for developing high-intensity NIR light sources.


The singlet fission process used to boost the number of excitons in an OLED breaks the 100 percent limit for exciton production efficiency. The emitting layer consists of a mixture of rubrene molecules, which are responsible for singlet fission, and ErQ3 molecules, which produce the emission. A singlet exciton, which is created when a positive charge and a negative charge combine on a rubrene molecule, can transfer half of its energy to a second rubrene molecule through the process of singlet fission, resulting in two triplet excitons. The triplet excitons then transfer to ErQ3 molecules, and the exciton energy is released as near-infrared emission by ErQ3.
Credit: William J. Potscavage Jr.

Although overall efficiency is still relatively low, the new method offers a way to increase efficiency and intensity without changing the emitter molecule, and the researchers are also looking into improving the emitter molecules themselves.

With further improvements, the researchers hope to get the exciton production efficiency up to 125%, which would be the next limit since electrical operation naturally leads to 25% singlets and 75% triplets. After that, they are considering ideas to convert triplets into singlets and possibly reach a quantum efficiency of 200%.
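The 125% figure follows directly from spin statistics plus fission, as this short calculation shows (assuming, ideally, that every singlet undergoes fission and every resulting triplet reaches the emitter):

```python
# Exciton bookkeeping behind the 125% target: electrical operation yields
# 25% singlets and 75% triplets; singlet fission splits each singlet into
# two triplet excitons, all of which may transfer to the ErQ3 emitter.
singlets, triplets = 0.25, 0.75
fission_yield = 1.0  # ideal case: every singlet undergoes fission

emissive_excitons = triplets + 2 * fission_yield * singlets
print(f"Exciton production efficiency: {emissive_excitons:.0%}")  # prints 125%
```

The reported 100.8% corresponds to an effective fission yield well below this ideal, which is why the researchers see room to improve toward the 125% ceiling.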

Kyushu University - www.kyushu-u.ac.jp

Kirigami fabrication method creates complex 3D nanoscale structures

Taking their inspiration from kirigami, the traditional Japanese paper-cutting and folding technique, researchers from the Massachusetts Institute of Technology have demonstrated a simple one-step fabrication method that could prove highly versatile for designing complex 3D structures at the nanoscale.

Described in the journal Science Advances in a paper titled "Nano-kirigami with giant optical chirality," the technique relies solely on ion beam irradiation of an 80nm-thick free-standing gold film.

Instead of relying on stimuli such as temperature changes, volume variations, or capillary forces to exert differential strains in cut-out geometries and bow them out of plane, the researchers used a gallium-based focused ion beam (FIB) alone, not only to cut out intricate shapes (at high-intensity milling) but also to create controlled, localized tensile stress through lower-intensity irradiation.

Upon ion irradiation, some of the gold atoms are sputtered away from the surface and the resulting vacancies cause grain coalescence which induces tensile stress close to the film surface, they explain in the paper.

Simultaneously, gallium ions are implanted into the film, inducing compressive stress, and it is this stress differential across the first 20nm of the gold film that determines the overall film deformation, the researchers report.

They were able to simplify the gold film's behaviour to a bilayer model, enabling predictive modelling of the deformation under selective irradiation.


Macro-kirigami and nano-kirigami side by side. (A) Camera images of the paper kirigami process of an expandable dome. (B) SEM images of an 80-nm-thick gold film, a 2D concentric arc pattern, and a 3D microdome. The high-dose FIB milling corresponds to the "cutting" process, and the global low-dose FIB irradiation of the sample area (enclosed by the dashed ellipse) corresponds to the "buckling" process in nano-kirigami. (C to F) A 12-blade propeller and (G to J) a four-arm pinwheel formed in a macroscopic paper and a gold nanofilm, respectively. Scale bars in SEM images are 1µm. Credit: Liu et al., Sci. Adv. 2018;4: eaat4436.

Hence, with pre-programmed irradiation, the researchers were able to cut out and 3D-shape various kirigami patterns at the nanoscale. They expect this novel manufacturing technique to find applications in the design of functional structures for plasmonics, nanophotonics, and optomechanics, as well as MEMS/NEMS, to name a few.

One example demonstrated in the paper is out-of-plane twisting through nano-kirigami to yield unique electromagnetic properties such as 3D optical chirality. The researchers obtained giant optical chirality effects by designing arrays of 3D microscale pinwheel-like structures with a lattice periodicity of 1.45µm, observing distinct circular dichroism (different absorption losses for right-hand and left-hand circularly polarized light waves).

Optical measurements showed that the circular dichroism spectra of left-handed (LH) and right-handed (RH) 3D pinwheel structures exhibited nearly opposite signs with similar amplitudes.

Massachusetts Institute of Technology – www.mit.edu

Related articles:
Graphene micro-supercapacitor powers flexible electronics
Origami technique turns flat optical sensors into hemispherical eyes
Modular robots fold like origami

Intel to acquire ASIC pioneer in programmable, IoT push

Semiconductor giant Intel Corp. (Santa Clara, CA) has said it plans to expand its programmable solutions business through the acquisition of eASIC Corp. (Santa Clara, CA).

eASIC is a 19-year-old company with its own structured ASIC approach based on programmable silicon vias rather than the SRAM-based routing used in conventional FPGAs. As a result, die sizes can be reduced compared to FPGAs of equivalent functionality, saving cost and power, the company claims.

Intel did not disclose how much it is spending to acquire eASIC, which will join Intel's Programmable Solutions Group, formed when Intel acquired Altera Corp. for $16.7 billion at the end of 2015.

A structured ASIC is an intermediary technology between FPGAs and ASICs. It offers performance and power efficiency closer to a standard-cell ASIC, but with a faster design time and a fraction of the non-recurring engineering costs associated with ASICs.

"Specifically, having a structured ASICs offering will help us better address high-performance and power-constrained applications that we see many of our customers challenged with in market segments like 4G and 5G wireless, networking and IoT. We can also provide a low-cost, automated conversion process from FPGAs, including competing FPGAs, to structured ASICs," said Dan McNamara, general manager of the programmable solutions group at Intel, in a statement.

"Longer term, we see an opportunity to architect a new class of programmable chip that takes advantage of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) technology to combine Intel FPGAs with structured ASICs in a system in package solution," McNamara said.

The deal is expected to close in 3Q18 after customary conditions are met.

www.intel.com

Related articles:
Intel CEO resigns over past relationship
Intel hires former Apple chip architect to lead silicon engineering

Self-driving tech startup emerges with 'human intuition' AI

Perceptive Automata (Somerville, MA), a provider of AI technology for autonomous vehicles that focuses on "understanding humans," has emerged from stealth mode and announced that it has raised $3 million in seed funding.

According to the company, it has solved "the hardest of the hard problems for autonomous vehicles" - the problem of understanding human behavior. To address it, the company has developed a software-based system that integrates pedestrian visibility, motion detection, and machine learning to predict pedestrian movement - in other words, "human intuition for self-driving cars."

"We've designed a model that can use the whole spectrum of subtle, unconscious insights that we, as humans, use to make incredibly sophisticated judgments about what’s going on in someone else’s head," says Sam Anthony, CTO and co-founder of Perceptive Automata. "You could say that, in a sense, our models develop their own human-like 'intuition.'"

Currently, says the company, instead of smoothly responding to a situation the way human drivers would, self-driving cars "act alternately 'paranoid' - i.e., timid, skittish, and easily startled - and oblivious." As a result, their erratic behavior is both frustrating for passengers and other drivers and a reason why self-driving cars are often rear-ended.

To address this, the company takes image sensor data from vehicles showing interactions with people - pedestrians, bicyclists, and other motorists - and then shows the clips to groups of people who answer questions about the depicted pedestrian's (or motorist's, or cyclist's) level of intention and awareness based on what they are seeing. For example, one study respondent may think the pedestrian is waving at a car to go ahead, while another might think the pedestrian is asking the car to stop and let them cross.

The process is repeated hundreds of thousands of times, with all sorts of interactions. Then, says the company, it uses that data to train models that interpret the world the way people do.
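The aggregation step described here – many annotators, one clip – amounts to turning vote counts into a soft label that a model can be trained against. A hypothetical sketch, with invented category names and response counts:

```python
from collections import Counter

# Hypothetical aggregation of repeated human judgments about one clip into
# a soft label; the categories and counts below are invented for illustration.
responses = (["waving the car ahead"] * 62
             + ["asking the car to stop"] * 28
             + ["unaware of the car"] * 10)

counts = Counter(responses)
total = sum(counts.values())
soft_label = {intent: n / total for intent, n in counts.items()}

for intent, p in sorted(soft_label.items(), key=lambda kv: -kv[1]):
    print(f"{intent}: {p:.0%}")
```

Training on such distributions, rather than single hard labels, is one plausible way a model could reproduce the graded, sometimes ambiguous judgments humans make about intent.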

"Once trained, our deep learning models allow self-driving cars to understand and predict human behavior and, subsequently, react with human-like behaviors," says Anthony. "This has huge implications for safety, rider experience, and practical utility in the self-driving car industry."

Investors in Perceptive Automata include First Round Capital and Slow Ventures. According to the company, its human intuition AI module is already up and running in the vehicles of self-driving car companies around the world.

Perceptive Automata

Related articles:
Nvidia: Fully autonomous cars on the road in four years
Next-gen artificial perception better mimics human vision
Microsoft AI simulator expands to autonomous car research
AI technique helps robots learn by observing humans

NI, Spirent team on 5G NR test solution

Automated test and measurement systems provider National Instruments (Austin, TX) and telecommunications testing company Spirent Communications (Crawley, UK) have announced a collaboration to develop test systems for 5G New Radio (NR) devices.

The collaboration, say the companies, will allow 5G chipset and device manufacturers to validate the performance of 5G NR smartphones and IoT devices in the lab without requiring access to expensive and complex 5G base stations. As part of the arrangement, Spirent has adopted NI's flexible software defined radio (SDR) products in the development of its 5G performance solution.

Spirent's solution will use NI's USRP (Universal Software Radio Peripheral) devices and mmWave Transceiver System, and will include 5G NR test scenarios for mobile location, video, data, audio, and calling performance. Key architectural details of the solution include the use of NI's LabVIEW FPGA to emulate layer 1 through layer 3 of the 5G NR protocol stack.

"Building on the strength of NI's early success in 5G research and prototyping, combined with the modularity of its platform, accelerates initial interoperability and means our customers can feel confident that the platform can adapt to the evolving standards," says Rob VanBrunt, general manager of Spirent's Connected Devices business unit. "5G test engineers already recognize NI's off-the-shelf platform as the industry’s most flexible and powerful hardware available. Integrating their advanced signal processing capabilities into our 8100 platform enables an attractive upgrade path for our existing customers."

The new 5G performance test solution will include support for both sub-6-GHz and millimeter-wave radio bands and will integrate with Spirent's existing network emulation platform. The system will also feature up to 2 GHz of bandwidth, the company says.

"The marriage of our high-performance platform and Spirent's best-in-class test methodology for measuring the mobile user experience is exciting for the industry," says James Kimery, director of wireless research at NI. "Being able to assess the accuracy of cellular location in 5G environments and measuring the performance of video and data delivery are critical needs as 5G devices come on line starting in 2019."

NI
Spirent Communications

Related articles:
National Instruments, Samsung partner on test UEs for 5G NR
First 5G modem to usher in next-gen cellular
Intel 5G modem announced
Europe’s first 5G network went live in Berlin

2018-07-16-smart2zero-newsletter

Smart2.0 Newsletter
Monday, July 16, 2018

Sections: Top News, Technologies to watch, Products, New Products for Smart Designs, More products, Technical Papers, Events

Matlab accelerates deep learning applications on Nvidia chips

Matlab, one of the most widely used tools in the engineer's toolbox, now offers an integration option for Nvidia's inference optimization software TensorRT. This makes it easier for users to develop new AI and deep learning models in Matlab. Matlab provider Mathworks promises deep learning inference up to five times faster on Nvidia GPUs than TensorFlow.

The software addresses the growing requirements of applications in the embedded and automotive sectors. The link to TensorRT is made through GPU Coder, which generates optimized code for graphics chips; the main applications are expected in deep learning, embedded vision, and autonomous systems.

Matlab provides a complete workflow for the rapid training, validation, and deployment of deep learning models. Engineers can use GPU resources without additional programming, allowing them to focus on their applications instead of on performance tuning. The new integration of Nvidia TensorRT with GPU Coder enables deep learning models developed in Matlab to run on Nvidia GPUs with high throughput and low latency. Internal benchmarks show that CUDA code generated by Matlab in combination with TensorRT delivers five times the performance of the corresponding TensorFlow implementation for AlexNet, and 1.25 times for VGG-16.
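The quoted figures are throughput ratios, which relate directly to per-frame latency. As a quick sanity check, the relationship can be expressed in a few lines; the latency numbers below are hypothetical and merely chosen to reproduce the article's ratios.

```python
# Speedup expressed as the ratio of baseline to optimized latency
# (equivalently, the ratio of optimized to baseline throughput).
def speedup(baseline_latency_ms: float, optimized_latency_ms: float) -> float:
    return baseline_latency_ms / optimized_latency_ms

# Hypothetical numbers: if a TensorFlow baseline ran AlexNet at 50 ms/frame,
# a 5x claim implies ~10 ms/frame from GPU Coder + TensorRT.
print(speedup(50.0, 10.0))   # 5.0
print(speedup(12.5, 10.0))   # 1.25  (the VGG-16 ratio)
```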

Mathworks will showcase the new software at the GPU Technology Conference from October 9 to 11 in Munich.

More information: https://uk.mathworks.com/solutions/deep-learning.html

Autonomous robot evolution project eyes extreme environments

Researchers are working on a project looking at the automated evolution of robot designs for extreme environments such as nuclear reactors.

The four-year Autonomous Robot Evolution (ARE) project sees the Bristol Robotics Lab (BRL) working with the University of York, Edinburgh Napier University, and the Free University of Amsterdam, as well as EPFL in Lausanne and NASA's Jet Propulsion Laboratory.

The project will develop a demonstrator that 3D prints and assembles complete robots, trains them in a 'nursery', and then tests them in a mock nuclear plant. The whole process is fully automatic, allowing the designs to evolve.

“We've been trying to win support for this project for five years or so, and only now succeeded. This is a project that we've been thinking and writing about for a long time, so to have the opportunity to try out our ideas for real is wonderful,” said Alan Winfield, professor of robot ethics at the University of the West of England, which hosts BRL.

BRL is developing a purpose-designed 3D printing system, which it calls a 'birth clinic', to print small mobile robots. This will need to pick and place a number of pre-designed and fabricated electronics, sensing, and actuation modules into the printing work area, which will then be overprinted with hot plastic to form the complete robot.

After testing in the mock reactor, the designs of the most effective robots will be combined to create the next generation of 'child' robots. Even with 3D printing this is a slow and resource-hungry process, says Winfield, so the project is also running a parallel process of simulated evolution in a virtual environment developed at York, along with the algorithms for evolving the designs, so that the real-world environment is used to calibrate the virtual one.
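The select-combine-mutate loop described above is, at its core, a genetic algorithm. A minimal sketch follows, with a toy fitness function standing in for the mock-reactor trials; all names and parameters are our own, not the ARE project's.

```python
import random

# Toy genetic algorithm mirroring the ARE workflow: evaluate a population of
# robot "genomes", keep the fittest, and combine them into child designs.
# The fitness function is a stand-in for the mock-reactor trials.

GENOME_LEN = 8  # e.g. encoded choices of sensing/actuation modules

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder objective: reward genomes with more active modules.
    return sum(genome)

def crossover(a, b):
    # Combine two parent designs at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=20, pop_size=16, elite=4):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                      # the "most effective robots"
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children                   # next generation
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best))
```

In the ARE project the expensive step is the fitness evaluation (printing and physically testing a robot), which is exactly why the parallel virtual-evolution track matters: simulated trials are cheap, and the real environment is used to keep them honest.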

A hybrid real-virtual process under the control of ecosystem manager software developed at Napier will allow real and virtual robots to combine, and the resulting child robots to be printed and tested in either the virtual or real environments.

BRL will also integrate all the components of the system into a demonstrator, combining the real-world birth clinic, nursery, and mock nuclear environment with the virtual environment and the ecosystem manager, and will then undertake the evaluation and analysis.

www.brl.ac.uk


Advanced plant control and estimation platform upgraded

Yokogawa Electric Corporation and Shell have jointly developed the R5.02 Platform for Advanced Control and Estimation.

The platform is a software suite that brings together advanced plant process control technology developed by Shell and the real-time control technology from Yokogawa.

The upgrade from version R5.01 provides enhanced functionality that optimizes the control of operations throughout the plant, maintains high system availability, reduces operator workload, and improves productivity.

The Platform for Advanced Control and Estimation suite is positioned as a solution in Yokogawa’s OpreX Asset Operations and Optimization family.

www.yokogawa.com/

Related news:
Silicon Mobility: Field Programmable Control Unit​
Mouser now stocking Infineon iMOTION IMC100 motor control ICs
Researchers attempt to control robots using brain power alone
Fanless DIN-rail embedded PC is optimised for control cabinets

IBM, Telit team on global industrial IoT deployments

Telit has announced that its deviceWISE IoT platform is now fully interoperable with the IBM IoT platform. Working together, the two companies will help manufacturers and other businesses to minimize the cost, risk, complexity and lead time of deploying solutions for monitoring and control, industrial automation, asset tracking and field service operations.

The combination of the IBM Watson IoT platform and the deviceWISE IoT platform gives manufacturers and other industrial businesses powerful new options for near-instant onboarding of industrial products, systems, and assets, and for applying advanced analytics, artificial intelligence, and rapid application development.

Further, deviceWISE streamlines the process of integrating industrial IoT (IIoT) devices and applications by providing a large library of native device drivers and industrial protocols. This eliminates the need for custom coding and other expensive, time-consuming integration tasks, so businesses can add value to their IIoT bottom line and realize competitive benefits even faster.
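A driver library of this kind is typically built around a registry that maps protocol names to driver implementations, so onboarding a new device becomes a lookup rather than a coding task. The sketch below illustrates the pattern; the names are our own and not part of the deviceWISE API.

```python
# Minimal sketch of a protocol-driver registry, showing how a library of
# native drivers removes per-device custom coding. Illustrative names only.

DRIVERS = {}

def register(protocol):
    # Decorator that adds a driver class to the registry under its protocol.
    def wrap(cls):
        DRIVERS[protocol] = cls
        return cls
    return wrap

@register("modbus")
class ModbusDriver:
    def read(self, register_addr):
        return f"modbus read @{register_addr}"  # stub in place of real I/O

@register("opcua")
class OpcUaDriver:
    def read(self, node_id):
        return f"opcua read {node_id}"  # stub in place of real I/O

def connect(protocol):
    # Onboarding a device reduces to a dictionary lookup.
    try:
        return DRIVERS[protocol]()
    except KeyError:
        raise ValueError(f"no native driver for {protocol!r}") from None

print(connect("modbus").read(40001))  # modbus read @40001
```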

The IBM Watson IoT platform, available on the IBM Cloud, brings real-time insights to IoT data along with powerful artificial intelligence (AI)-based capabilities, including cognitive analysis, machine learning, and natural language processing.

www.telit.com

Massive MIMO market to expand at 39% CAGR to 2026

Valued at $1,051.8 million in 2017, the global Massive MIMO market is projected to expand at a compound annual growth rate (CAGR) of more than 39% from 2018 to 2026, according to a new report published by Transparency Market Research (TMR). Advancements in 4.5G technology, improved signal-to-noise ratio (SNR) and link reliability, and the expected launch of 5G in 2019 are the primary factors expected to boost the market during the forecast period.

There are already strategies and projects in place for smart cities, self-driving cars are on the horizon, social media has grown at a vast scale, and Internet of Things (IoT) devices and systems are taking off. Significant adoption of 4G LTE in all these arenas is generating tremendous amounts of data and the expected launch of 5G is poised to connect billions of devices in the coming years.

The basic idea behind the Massive MIMO framework is the use of large antenna arrays at the base station to reduce the error rate and increase spectral efficiency, which results in a system that is larger and more complex than traditional ones. This complexity makes it difficult to diagnose problems in a Massive MIMO system. Furthermore, because of the huge number of antennas and the added complexity, Massive MIMO systems are far more expensive than traditional antenna systems.
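The capacity benefit that motivates those large arrays can be illustrated with a toy link calculation: combining the signals from N receive antennas raises the post-combining SNR roughly in proportion to N, so achievable capacity keeps climbing as antennas are added. The sketch below uses a simple Rayleigh-fading model with illustrative numbers.

```python
import math
import random

# Toy illustration of the array gain behind Massive MIMO. With maximum-ratio
# combining across N antennas, the effective channel power is the sum of the
# per-antenna gains, and capacity C = log2(1 + SNR * power) in bits/s/Hz.

def simo_capacity(num_antennas, snr_linear, rng):
    # |h_i|^2 under Rayleigh fading is exponentially distributed (mean 1).
    channel_power = sum(rng.expovariate(1.0) for _ in range(num_antennas))
    return math.log2(1.0 + snr_linear * channel_power)

rng = random.Random(42)
snr = 1.0  # 0 dB transmit SNR
for n in (1, 8, 64):
    caps = [simo_capacity(n, snr, rng) for _ in range(200)]
    print(n, "antennas:", round(sum(caps) / len(caps), 2), "bits/s/Hz")
```

The average capacity grows steadily with the antenna count, which is the payoff side of the cost-and-complexity trade-off described above.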

In terms of technology, the Massive MIMO market is segmented into LTE Advanced, LTE Advanced Pro, and 5G. In 2017, the LTE Advanced segment accounted for the largest share of the market and is projected to see a significant CAGR over the forecast period. LTE Advanced supports data rates of up to 1 Gbps, which in turn improves the customer experience.

Also, 5G cellular networks are expected to be commercially deployed by the end of 2019, and some leading global network providers are already upgrading their existing infrastructure with 5G-ready network systems that can support both existing and 5G networks.

The market in Asia Pacific is expected to expand at a considerable CAGR during the forecast period. This growth is mainly due to the presence of well-established market players such as ZTE Corporation and Huawei Technologies Co., Ltd.

Also, the growth is fueled by significant economies such as China, Japan, and India. Furthermore, players from the region are establishing partnerships with various local players for market expansion as well as technology advancement. For instance, ZTE Corporation has been teaming up with various regional players and network operators for 5G trials.

In October 2017, ZTE, in collaboration with Japanese carrier SoftBank Group Corp, achieved a peak data rate of 1 Gbps in a pre-5G Massive MIMO (multiple-input, multiple-output) trial. Additionally, in February 2017, ZTE, in partnership with Smartfren, successfully completed pre-5G Massive MIMO tests at the Teras Kota shopping mall in Jakarta.

www.transparencymarketresearch.com
