Cedar Lane Technologies Sues Huawei over CIS Data Transmission Patents
Canada-based Cedar Lane Technologies sues Huawei over infringement of 7 US patents, 3 of which describe image sensor data transmission schemes: 6,473,527; 6,972,790; and 8,537,242.
How many photons does it take to form an image?
ResearchGate: A University of Glasgow, UK, paper "How many photons does it take to form an image?" by Steven D. Johnson, Paul-Antoine Moreau, Thomas Gregory, and Miles J. Padgett tries to answer a somewhat philosophical question:
"If a picture tells a thousand words, then we might ask ourselves how many photons does it take to form a picture? In terms of the transmission of the picture information, then the multiple degrees of freedom (e.g., wavelength, polarization, and spatial mode) of the photon mean that high amounts of information can be encoded such that the many pixel values of an image can, in principle, be communicated by a single photon. However, the number of photons required to transmit the image information is not necessarily, at least technically, the same as the number of photons required to image an object. Therefore, another equally important question is how many photons does it take to measure an unknown image?
For intensity images, it seems that one detected photon per image pixel is a realistic guide, but this may be reduced by making further assumptions on the sparsity of an image in a chosen basis, such as spatial frequency. In this last respect, the advent of machine learning, knowledge-based reconstruction, and similar techniques alleviates the need for a user to explicitly define the sparse basis, but rather the prior is determined from a library of previously recorded images of a similar type. This machine learnt prior can then potentially be designed into the optimum measurement strategy. It seems likely therefore that future imaging systems will combine state-of-the-art single photon detectors with knowledge-based processing both in the design of the system itself and in the processing of the collected data to yield images or decisions based on these data on the basis of extremely low numbers of photons, potentially well below one photon per image pixel."
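To make the "one detected photon per pixel" guideline concrete, here is a minimal simulation sketch (not from the paper; the disk-shaped test scene and the photon levels are assumptions) that draws Poisson-distributed photon counts from a known intensity image:

```python
import numpy as np

# Minimal sketch (not from the paper): simulate photon-starved imaging by drawing
# Poisson-distributed photon counts from a known intensity image.
rng = np.random.default_rng(0)

# Assumed synthetic scene: a bright disk on a dim background, normalized to [0.1, 1].
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
scene = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2 < (h / 4) ** 2).astype(float)
scene = 0.1 + 0.9 * scene

for mean_photons_per_pixel in (0.1, 1.0, 10.0):
    lam = scene * mean_photons_per_pixel / scene.mean()  # expected photons per pixel
    counts = rng.poisson(lam)                            # shot-noise-limited detection
    snr = np.sqrt(lam.mean())                            # per-pixel shot-noise SNR ~ sqrt(N)
    print(f"{mean_photons_per_pixel:5.1f} photons/pixel: "
          f"{counts.sum()} photons total, shot-noise SNR ~ {snr:.2f}")
```

At 0.1 photons per pixel the raw frame is essentially a sparse binary mask, which is exactly the regime where the learned priors discussed above become essential.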
While we are on the subject of single-photon imaging, the International SPAD Sensor Workshop (ISSW 2020), held as an on-line event in June, published a nice SPAD photon-accumulation video of the city of Edinburgh:
"If a picture tells a thousand words, then we might ask ourselves how many photons does it take to form a picture? In terms of the transmission of the picture information, then the multiple degrees of freedom (e.g., wavelength, polarization, and spatial mode) of the photon mean that high amounts of information can be encoded such that the many pixel values of an image can, in principle, be communicated by a single photon. However, the number of photons required to transmit the image information is not necessarily, at least technically, the same as the number of photons required to image an object. Therefore, another equally important question is how many photons does it take to measure an unknown image?
For intensity images, it seems that one detected photon per image pixel is a realistic guide, but this may be reduced by making further assumptions on the sparsity of an image in a chosen basis, such as spatial frequency. In this last respect, the advent of machine learning, knowledge-based reconstruction, and similar techniques alleviates the need for a user to explicitly define the sparse basis, but rather the prior is determined from a library of previously recorded images of a similar type. This machine learnt prior can then potentially be designed into the optimum measurement strategy. It seems likely therefore that future imaging systems will combine state-of-the-art single photon detectors with knowledge-based processing both in the design of the system itself and in the processing of the collected data to yield images or decisions based on these data on the basis of extremely low numbers of photons, potentially well below one photon per image pixel."
Once we are at single-photon imaging, the International SPAD Sensor Workshop (ISSW 2020) held as an on-line event in June published a nice SPAD photon-accumulation video of the city of Edinburgh:
↧
↧
Unispectral Announces Tunable NIR Filter for Multispectral Cameras
PRNewswire: Unispectral announces what it calls the industry’s first mass-market ColorIR tunable NIR filter and spectral IR camera. Unispectral’s tunable filter turns low-cost IR cameras into 700-950nm spectral cameras. It is best suited for facial recognition, consumer portable devices, IoT, robotics, and mass-market cameras. ColorIR products enable advanced machine vision, material sensing, and computational photography.
The core product consists of a tunable MEMS filter assembled on a camera module. A Raspberry Pi is used to capture parameters and interface via USB or WiFi to a PC or mobile device. An SDK is included to develop additional applications.
“Our excellent team is proud to roll out this tunable filter which connects seeing with sensing. It makes spectral cameras accessible for mass-market platforms. The market strives to find an effective solution for adding spectral information to cameras and we believe our technology offers the best blend of performance, and cost,” said Ariel Raz, CEO of Unispectral.
The ColorIR camera captures multiple frames at different NIR wavelengths, filtered by a miniature Fabry–Pérot optical cavity MEMS element. This unique solution undercuts the price of legacy spectral cameras, thereby enabling new markets and use cases.
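The release does not document the SDK, but the capture flow it describes — stepping the Fabry–Pérot filter across the 700-950nm band and grabbing one frame per setting — might look roughly like the sketch below; the TunableFilter and Camera classes and their methods are purely hypothetical placeholders, not Unispectral's actual API:

```python
import numpy as np

# Hypothetical sketch of a tunable-filter capture loop. TunableFilter and Camera
# are placeholders standing in for whatever the real SDK exposes.
class TunableFilter:
    def set_wavelength(self, nm: float) -> None:
        print(f"filter tuned to {nm:.0f} nm")          # would drive the MEMS element

class Camera:
    def grab(self) -> np.ndarray:
        return np.zeros((480, 640), dtype=np.uint16)   # would return a real NIR frame

def capture_cube(filt, cam, start_nm=700, stop_nm=950, step_nm=25):
    """Sweep the filter and stack one monochrome frame per wavelength."""
    wavelengths = np.arange(start_nm, stop_nm + 1, step_nm)
    frames = []
    for wl in wavelengths:
        filt.set_wavelength(wl)     # tune the Fabry-Pérot cavity
        frames.append(cam.grab())   # one frame at this band
    return wavelengths, np.stack(frames)   # spectral cube of shape (bands, H, W)

wavelengths, cube = capture_cube(TunableFilter(), Camera())
print(cube.shape)   # (11, 480, 640) with the default 25 nm step
```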
Use Cases of ColorIR:
- Security Market: facial authentication, access control, payment terminals, fake bill detection
- Smartphone Camera: image enhancement, low-light and shadow picture corrections
- Medical Market: remote health inspection
- Agriculture: fruit inspection, pesticide detection
- Industrial: production line inspection
- Vehicle: DMS (driver monitoring system)
The ColorIR tunable MEMS EVK is available for pre-order. Shipping is planned for the end of July.
Microsoft Develops Under-Display Camera Solution for Videoconferencing
Microsoft Research works on embedding a camera under a display for videoconferencing:
"From the earliest days of videoconferencing it was recognized that the separation of the camera and the display meant the system could not convey gaze awareness accurately. Videoconferencing systems remain unable to recreate eye contact—a key element of effective communication.
Locating the camera above the display results in a vantage point that’s different from a face-to-face conversation, especially with large displays, which can create a sense of looking down on the person speaking.
Worse, the distance between the camera and the display mean that the participants will not experience a sense of eye contact. If I look directly into your eyes on the screen, you will see me apparently gazing below your face. Conversely, if I look directly into the camera to give you a sense that I am looking into your eyes, I’m no longer in fact able to see your eyes, and I may miss subtle non-verbal feedback cues."
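As a rough illustration with assumed numbers (a 15 cm camera-to-eyes offset and a 60 cm viewing distance, neither of which comes from Microsoft's post), the gaze error is simply the angle that the camera-display offset subtends at the viewing distance:

```python
import math

# Rough illustration with assumed numbers: vertical gaze error caused by a camera
# mounted above the position of the remote participant's eyes on screen.
camera_offset_m = 0.15      # assumed: camera sits ~15 cm above the displayed eyes
viewing_distance_m = 0.60   # assumed: typical desktop viewing distance

gaze_error_deg = math.degrees(math.atan2(camera_offset_m, viewing_distance_m))
print(f"apparent downward gaze of ~{gaze_error_deg:.0f} degrees")   # ~14 degrees
```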
"With transparent OLED displays (T-OLED), we can position a camera behind the screen, potentially solving the perspective problem. But because the screen is not fully transparent, looking through it degrades image quality by introducing diffraction and noise.
To compensate for the image degradation inherent in photographing through a T-OLED screen, we used a U-Net neural-network structure that both improves the signal-to-noise ratio and de-blurs the image.
We were able to achieve a recovered image that is virtually indistinguishable from an image that was photographed directly."
Via MSPowerUser
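Microsoft has not published the network here, but the general idea — a small U-Net-style encoder-decoder with a skip connection, trained to map degraded through-display captures to directly captured references — can be sketched as follows (an illustrative toy model; the channel counts, depth, and residual output are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Illustrative toy U-Net for through-display image restoration (not Microsoft's model).
def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(3, 32)                       # full-resolution features
        self.enc2 = block(32, 64)                      # half-resolution features
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)                      # 64 = upsampled 32 + skip 32
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return x + self.out(d1)                        # predict a residual correction

# Training would minimize a pixel loss between restored and directly captured images:
# loss = nn.functional.l1_loss(TinyUNet()(degraded), clean)
```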
"From the earliest days of videoconferencing it was recognized that the separation of the camera and the display meant the system could not convey gaze awareness accurately. Videoconferencing systems remain unable to recreate eye contact—a key element of effective communication.
Locating the camera above the display results in a vantage point that’s different from a face-to-face conversation, especially with large displays, which can create a sense of looking down on the person speaking.
Worse, the distance between the camera and the display mean that the participants will not experience a sense of eye contact. If I look directly into your eyes on the screen, you will see me apparently gazing below your face. Conversely, if I look directly into the camera to give you a sense that I am looking into your eyes, I’m no longer in fact able to see your eyes, and I may miss subtle non-verbal feedback cues."
"With transparent OLED displays (T-OLED), we can position a camera behind the screen, potentially solving the perspective problem. But because the screen is not fully transparent, looking through it degrades image quality by introducing diffraction and noise.
To compensate for the image degradation inherent in photographing through a T-OLED screen, we used a U-Net neural-network structure that both improves the signal-to-noise ratio and de-blurs the image.
We were able to achieve a recovered image that is virtually indistinguishable from an image that was photographed directly."
Via MSPowerUser
↧
Pet Nose-Print Recognition Technology
CnTechPost: China's Alipay insurance platform has announced that it is opening up its pet nose-print recognition technology and has joined forces with insurers to apply this technology to dog and cat insurance for the first time.
According to Alipay, the success rate of pet nose-print recognition technology exceeds 99% and is expected to be applied to urban pet management and lost pet scenarios in the future.
Resolving Fast Movement in Low Light with QIS
Purdue University publishes its paper presented at the 16th European Conference on Computer Vision (ECCV 2020), "Dynamic Low-light Imaging with Quanta Image Sensors" by Yiheng Chi, Abhiram Gnanasambandam, Vladlen Koltun, and Stanley H. Chan.
"Imaging in low light is difficult because the number of photons arriving at the sensor is low. Imaging dynamic scenes in low-light environments is even more difficult because as the scene moves, pixels in adjacent frames need to be aligned before they can be denoised. Conventional CMOS image sensors (CIS) are at a particular disadvantage in dynamic low-light settings because the exposure cannot be too short lest the read noise overwhelms the signal. We propose a solution using Quanta Image Sensors (QIS) and present a new image reconstruction algorithm. QIS are single-photon image sensors with photon counting capabilities. Studies over the past decade have confirmed the effectiveness of QIS for low-light imaging but reconstruction algorithms for dynamic scenes in low light remain an open problem. We fill the gap by proposing a student-teacher training protocol that transfers knowledge from a motion teacher and a denoising teacher to a student network. We show that dynamic scenes can be reconstructed from a burst of frames at a photon level of 1 photon per pixel per frame. Experimental results confirm the advantages of the proposed method compared to existing methods."
"Imaging in low light is difficult because the number of photons arriving at the sensor is low. Imaging dynamic scenes in low-light environments is even more difficult because as the scene moves, pixels in adjacent frames need to be aligned before they can be denoised. Conventional CMOS image sensors (CIS) are at a particular disadvantage in dynamic low-light settings because the exposure cannot be too short lest the read noise overwhelms the signal. We propose a solution using Quanta Image Sensors (QIS) and present a new image reconstruction algorithm. QIS are single-photon image sensors with photon counting capabilities. Studies over the past decade have confirmed the effectiveness of QIS for low-light imaging but reconstruction algorithms for dynamic scenes in low light remain an open problem. We fill the gap by proposing a student-teacher training protocol that transfers knowledge from a motion teacher and a denoising teacher to a student network. We show that dynamic scenes can be reconstructed from a burst of frames at a photon level of 1 photon per pixel per frame. Experimental results confirm the advantages of the proposed method compared to existing methods."
↧
Gene Weckler Passed Away
Gene Peter Weckler died of complications from Alzheimer’s on December 3, 2019. He was 87 years old.
Among his significant contributions to image sensor technology, in 1967 Gene published a seminal paper entitled: “Operation of pn junction photodetectors in a photon flux integrating mode,” which was published in the IEEE J. Solid-State Circuits. Nearly every image sensor built since then has operated in this mode. Gene also published several early papers on what we now call passive pixel image sensors during his time at Fairchild.
In 1971 he co-founded RETICON to further commercialize the technology. RETICON was acquired by EG&G in 1977. Gene stayed with EG&G for twenty years serving in many management roles including Director of Technology for the Opto Divisions. In 1997 Gene co-founded Rad-icon to commercialize the use of CMOS-based solid-state image sensors for use in x-ray imaging. Rad-icon was acquired by DALSA in 2008. Gene retired from full time work in 2009 but continued as a member of the Advisory Board for the College of Engineering at Utah State University.
In 2013, Gene Weckler received the International Image Sensor Society (IISS) Exceptional Lifetime Achievement Award.
An oral history recording can be found here: http://www.semiconductormuseum.com/Transistors/ShockleyTransistor/OralHistories/Weckler/Weckler_Index.htm
AMOLED Displays with In-Pixel Photodetector
Intechopen publishes a book chapter "AMOLED Displays with In-Pixel Photodetector" by Nikolaos Papadopoulos, Pawel Malinowski, Lynn Verschueren, Tung Huei Ke, Auke Jisk Kronemeijer, Jan Genoe, Wim Dehaene, and Kris Myny from Imec.
"The focus of this chapter is to consider additional functionalities beyond the regular display function of an active matrix organic light-emitting diode (AMOLED) display. We will discuss how to improve the resolution of the array with OLED lithography pushing to AR/VR standards. Also, the chapter will give an insight into pixel design and layout with a strong focus on high resolution, enabling open areas in pixels for additional functionalities. An example of such additional functionalities would be to include a photodetector in pixel, requiring the need to include in-panel TFT readout at the peripherals of the full-display sensor array for applications such as finger and palmprint sensing."
Meanwhile, Vkansee works with China-based Tianma to productize its under-display optical fingerprint sensor:
"Vkansee’s proprietary Matrix Pinhole Image Sensing (MAPIS) – is integrated into the mobile phone OLED display panel, effectively turning the entire display into a high-resolution fingerprint lens allowing simple installation of the image sensor anywhere or everywhere under the OLED display screen. Unlike other solutions that implement FOD and yield poor quality fingerprint images, the MAPIS OLED solution captures high-quality images, because the in-panel optical design avoids the influence of obstructing TFT driver circuits."
“We are focused on bringing our novel MAPIS optical fingerprinting technology to users across the globe to improve security and convenience, and hope to make MAPIS optics as a standard design of OLED,” stated Jason Chaikin, President of VKANSEE. “In partnership with Tianma, we’re confident this will happen in the near future. We believe integrating the MAPIS optics into the OLED screen will greatly change the fingerprint sensor industry similar to the history of integrating touch sensing into the OLED screen.”
"The focus of this chapter is to consider additional functionalities beyond the regular display function of an active matrix organic light-emitting diode (AMOLED) display. We will discuss how to improve the resolution of the array with OLED lithography pushing to AR/VR standards. Also, the chapter will give an insight into pixel design and layout with a strong focus on high resolution, enabling open areas in pixels for additional functionalities. An example of such additional functionalities would be to include a photodetector in pixel, requiring the need to include in-panel TFT readout at the peripherals of the full-display sensor array for applications such as finger and palmprint sensing."
Meanwhile, Vkansee works with China-based Tianma to productize its on-dusplay optical fingerprint sensor:
"Vkansee’s proprietary Matrix Pinhole Image Sensing (MAPIS) – is integrated into the mobile phone OLED display panel, effectively turning the entire display into a high-resolution fingerprint lens allowing simple installation of the image sensor anywhere or everywhere under the OLED display screen. Unlike other solutions that implement FOD and yield poor quality fingerprint images, the MAPIS OLED solution captures high-quality images, because the in-panel optical design avoids the influence of obstructing TFT driver circuits."
“We are focused on bringing our novel MAPIS optical fingerprinting technology to users across the globe to improve security and convenience, and hope to make MAPIS optics as a standard design of OLED,” stated Jason Chaikin, President of VKANSEE. “In partnership with Tianma, we’re confident this will happen in the near future. We believe integrating the MAPIS optics into the OLED screen will greatly change the fingerprint sensor industry similar to the history of integrating touch sensing into the OLED screen.”
↧
Thesis on SPAD Integration into 28nm SOI Process
INL - Institut des Nanotechnologies de Lyon, France, publishes the PhD thesis "Integration of Single Photon Avalanche Diodes in Fully Depleted Silicon-on-Insulator Technology" by Tulio Chaves de Albuquerque. It starts with a nice introduction to generic SPAD technology and then goes into its integration into the FDSOI process.
Assorted News: ST, Sony, ON Semi, ASE, Photon Force
GlobeNewswire: ST reports that Aura Aware is using ST’s FlightSense technology in a smart distance-awareness portable device suitable for use at retail counters and check-in desks. The easy-to-setup device displays a green OK signal that changes to red if a person crosses a safe minimum-distance threshold.
AnandTech reports that, more than four years after being acquired by Sony, Altair Semiconductor is renaming itself Sony Semiconductor Israel. The AI inference processor that’s been integrated into the new IMX500/501 sensors was developed by Altair/Sony Semiconductor Israel.
"We have been honored to be part of Sony for the past four years, playing a key role in the company’s core business,” says Sony Semiconductor Israel CEO Nohik Semel, “To better reflect our long-term commitment to our partners and customers, as well as the quality of our offering, we have decided to change Altair’s company name to Sony.”
ON Semi publishes a promotional video about robotic vision applications:
Digitimes reports that ASE starts mass production of LiDAR modules in 2H2020: Taiwan's backend house ASE Technology is expected to start volume production of LiDAR modules in the second half of 2020 as it has indirectly entered supply chains of first-tier automakers through its international clients. ASE Technology is said to aggressively incorporate AI technology to support smart production of ToF LiDARs.
Speaking of LiDARs, Forbes contributor Sabbir Rangwala publishes a comparison table of possible spots for LiDAR in a car:
Edinburgh, UK-based Photon Force, a provider of time-resolved SPAD cameras, has received a Business Start-Up Award from the Institute of Physics (IOP).
"Founded in 2015 as a spin-out from Robert Henderson’s renowned CMOS Sensors and Systems Group at the University of Edinburgh, Photon Force has won the IOP accolade for the development of its ground-breaking sensors that enable ultrafast, single photon sensitive imaging. Photon Force sensors are used worldwide and facilitate progress in applications including quantum physics, communications and biomedical imaging/neuroscience."
Development of Reliable WLCSP for Automotive Applications
MDPI paper "Development of Reliable, High Performance WLCSP for BSI CMOS Image Sensor for Automotive Application" by Tianshen Zhou, Shuying Ma, Daquan Yu, Ming Li, and Tao Hang from Shanghai Jiao Tong University, Xiamen University, and Huatian Technology (Kunshan) Electronics belongs to a Special Issue "Smart Image Sensors."
"To meet the urgent market demand for small package size and high reliability performance for automotive CMOS image sensor (CIS) application, wafer level chip scale packaging (WLCSP) technology using through silicon vias (TSV) needs to be developed to replace current chip on board (COB) packages. In this paper, a WLCSP with the size of 5.82 mm × 5.22 mm and thickness of 850 μm was developed for the backside illumination (BSI) CIS chip using a 65 nm node with a size of 5.8 mm × 5.2 mm. The packaged product has 1392 × 976 pixels and a resolution of up to 60 frames per second with more than 120 dB dynamic range. The structure of the 3D package was designed and the key fabrication processes on a 12” inch wafer were investigated. More than 98% yield and excellent optical performance of the CIS package was achieved after process optimization. The final packages were qualified by AEC-Q100 Grade 2."
"To meet the urgent market demand for small package size and high reliability performance for automotive CMOS image sensor (CIS) application, wafer level chip scale packaging (WLCSP) technology using through silicon vias (TSV) needs to be developed to replace current chip on board (COB) packages. In this paper, a WLCSP with the size of 5.82 mm × 5.22 mm and thickness of 850 μm was developed for the backside illumination (BSI) CIS chip using a 65 nm node with a size of 5.8 mm × 5.2 mm. The packaged product has 1392 × 976 pixels and a resolution of up to 60 frames per second with more than 120 dB dynamic range. The structure of the 3D package was designed and the key fabrication processes on a 12” inch wafer were investigated. More than 98% yield and excellent optical performance of the CIS package was achieved after process optimization. The final packages were qualified by AEC-Q100 Grade 2."
↧
Smartsens SC500AI Sensor Improves Read Noise to 0.63e-
PRNewswire: SmartSens announces the SC500AI widescreen smart image sensor. The majority of 5MP security camera sensors offer a 4:3 aspect ratio that is not optimized for the widescreen format of modern LCD displays. The SmartSens SC500AI addresses this shortcoming with 1620P 16:9 5MP video output in the same form factor.
In a side-by-side comparison with SmartSens' previous generation sensor, the new SC500AI reduces the dark current from 389 e- at 80˚C to 210 e-. The total RN, or Read Noise, is reduced from 0.75e- to 0.63e-. And the sensitivity level also shows a noticeable improvement, growing from 2800 mV/lux-sec to 3680 mV/lux-sec.
These improvements are made possible by SmartSens' unique SFCPixel technology, which takes advantage of the close proximity between the source follower and the photodiodes to increase the sensitivity level, producing high-quality night-vision images. SmartSens' proprietary PixGain technology additionally enables the sensor to achieve excellent HDR performance even under glaring sunlight.
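To put the quoted figures in perspective, a back-of-the-envelope low-light SNR estimate can be sketched as below; the 20 e- scene signal is an assumed number, and the quoted dark-current electrons are treated here as accumulated over one exposure:

```python
import math

# Back-of-the-envelope low-light SNR: signal shot noise + dark-current shot noise + read noise.
# Dark-current (at 80C) and read-noise figures are the ones quoted in the release;
# the 20 e- signal is an assumed dim scene, and dark electrons are assumed per exposure.
def snr(signal_e, dark_e, read_noise_e):
    return signal_e / math.sqrt(signal_e + dark_e + read_noise_e ** 2)

signal_e = 20.0
for label, dark_e, read_noise_e in [("previous gen", 389.0, 0.75), ("SC500AI", 210.0, 0.63)]:
    print(f"{label:12s}: SNR ~ {snr(signal_e, dark_e, read_noise_e):.2f}")
# previous gen: SNR ~ 0.99, SC500AI: SNR ~ 1.32 - the dark-current reduction dominates here
```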
Existing customers of SmartSens' previous-generation P2P products will see a hassle-free system upgrade to the SC500AI, which is compatible with a wide array of 1/3-inch 5MP lenses for professional security products.
"SmartSens continues to build on its AI series of sensors, offering our customers a range of solutions utilizing the latest photosensor technologies to address the most challenging lighting conditions," said Chris Yu, Chief Marketing Officer of SmartSens. "We continue to strengthen our portfolio to match new applications and our customers' quickly-evolving needs."
The SC500AI Image Sensor is available for testing immediately.
ST Reports Decline in Imaging Sales
SeekingAlpha publishes ST Q2 earnings call with updates on the company's imaging business:
"Net revenues were $2.09 billion, down 6.5% on a sequential basis. As expected, this was due to a decline in Automotive, Analog and Imaging products, partially offset by growth in Microcontrollers, Digital and Power Discrete.
Net revenues decreased 4% year-over-year with lower sales in Imaging, Automotive and MEMS, partially offset by higher sales in microcontrollers, digital, analog and power discrete.
AMS [Division] revenues decreased 10.1% with MEMS and Imaging lower while Analog sales were higher.
During the quarter, we also won sockets for our Global Shutter Automotive Imaging Solution for driver monitoring systems from two major OEMs. And this is an important step in our diversification strategy related to optical sensing solutions."
"Net revenues were $2.09 billion, down 6.5% on a sequential basis. As expected, this was due to a decline in Automotive, Analog and Imaging products, partially offset by growth in Microcontrollers, Digital and Power Discrete.
Net revenues decreased 4% year-over-year with lower sales in Imaging, Automotive and MEMS, partially offset by higher sales in microcontrollers, digital, analog and power discrete.
AMS (Division] revenues decreased 10.1% with MEMS and Imaging lower while Analog sales were higher.
During the quarter, we also won sockets for our Global Shutter Automotive Imaging Solution for driver monitoring systems from two major OEMs. And this is an important step in our diversification strategy related to optical sensing solutions."
↧
↧
LiDAR News: Ibeo, Velodyne, Hesai, Cepton, Luminar, SiLC
Ibeo publishes a webinar explaining the company's approach to the solid-state LiDAR:
In another webinar, Ibeo presents its view on AI challenges and solutions in autonomous driving.
BusinessWire, BusinessWire: Velodyne announces a long-term global licensing agreement with Hesai Photonics Technology encompassing 360° surround-view lidar sensors. As a result of this agreement, Velodyne and Hesai have agreed to dismiss current legal proceedings in the U.S., Germany and China that exist between the two companies.
“We think this agreement will expand the adoption of lidar world-wide and save lives,” says David Hall, Velodyne Founder and Chairman of the Board. The relationship with Hesai is the third major licensing agreement for Velodyne’s lidar technology.
BusinessWire: Cepton hires Andrew Klaus, who previously held the same position at Innoviz, as country manager for Japan.
Earlier this year, Cepton concluded a successful Series C financing round led by Koito Manufacturing, the world’s largest automotive lighting Tier 1. Around the same time, Cepton expanded its business team in Europe with the appointment of two Directors of Product Management and Marketing.
BusinessWire: Luminar announces an expansion of its leadership as it drives into its next phase of growth in automotive. Over the next 18 months, the company is scaling its technology into series production, starting with Volvo in 2022, and will begin shipping its Iris sensing and perception platform within the year. Luminar hired 5 new executives from ZF, Mobileye, Magic Leap, and Goldman Sachs.
PRNewswire: SiLC, a developer of single-chip FMCW LiDAR, announces that Frost & Sullivan has selected the company for its 2020 North American 3D/4D LiDAR Imaging Industry Technology Innovation Award. This recognition comes on the heels of SiLC being selected by EETimes as one of the emerging silicon startups to watch worldwide. According to the Frost & Sullivan report, SiLC's proprietary 4D LIDAR chip is ideally positioned to disrupt the global LIDAR market due to its unique capabilities with broad applications, including autonomous vehicles, machine vision, and augmented reality.
"We're delighted by this recognition and greatly appreciate the depth and quality of Frost & Sullivan's analysis of SiLC's breakthrough Smart 4D Vision Sensor technology," said Mehdi Asghari, CEO of SiLC. "We also appreciate that Frost & Sullivan highlighted the breadth of our technology, which has the potential to replace ToF-based LiDAR sensors used in applications from automotive advanced driver assistance systems (ADAS) and self-driving autonomous vehicles to augmented reality, security, and industrial machine vision."
4D Light-in-Flight Imaging with SPADs
EPFL and Canon paper "Superluminal Motion-Assisted 4-Dimensional Light-in-Flight Imaging" by Kazuhiro Morimoto, Ming-Lo Wu, Andrei Ardelean, and Edoardo Charbon presents an XYZT capture of light propagation.
"Advances in high speed imaging techniques have opened new possibilities for capturing ultrafast phenomena such as light propagation in air or through media. Capturing light-in-flight in 3-dimensional xyt-space has been reported based on various types of imaging systems, whereas reconstruction of light-in-flight information in the fourth dimension z has been a challenge. We demonstrate the first 4-dimensional light-in-flight imaging based on the observation of a superluminal motion captured by a new time-gated megapixel single-photon avalanche diode camera. A high resolution light-in-flight video is generated with no laser scanning, camera translation, interpolation, nor dark noise subtraction. A machine learning technique is applied to analyze the measured spatio-temporal data set. A theoretical formula is introduced to perform least-square regression, and extra-dimensional information is recovered without prior knowledge. The algorithm relies on the mathematical formulation equivalent to the superluminal motion in astrophysics, which is scaled by a factor of a quadrillionth. The reconstructed light-in-flight trajectory shows a good agreement with the actual geometry of the light path. Our approach could potentially provide novel functionalities to high speed imaging applications such as non-line-of-sight imaging and time-resolved optical tomography."
"Advances in high speed imaging techniques have opened new possibilities for capturing ultrafast phenomena such as light propagation in air or through media. Capturing light-in-flight in 3-dimensional xyt-space has been reported based on various types of imaging systems, whereas reconstruction of light-in-flight information in the fourth dimension z has been a challenge. We demonstrate the first 4-dimensional light-in-flight imaging based on the observation of a superluminal motion captured by a new time-gated megapixel single-photon avalanche diode camera. A high resolution light-in-flight video is generated with no laser scanning, camera translation, interpolation, nor dark noise subtraction. A machine learning technique is applied to analyze the measured spatio-temporal data set. A theoretical formula is introduced to perform least-square regression, and extra-dimensional information is recovered without prior knowledge. The algorithm relies on the mathematical formulation equivalent to the superluminal motion in astrophysics, which is scaled by a factor of a quadrillionth. The reconstructed light-in-flight trajectory shows a good agreement with the actual geometry of the light path. Our approach could potentially provide novel functionalities to high speed imaging applications such as non-line-of-sight imaging and time-resolved optical tomography."
↧
Espros on ToF Illuminator Importance
Espros' July 2020 newsletter talks about the importance of the ToF light emitter:
Due to high illumination power, significant heat generation by the illumination warms up not only the illuminator, but the whole camera. Thus, good thermal management is key. Heat dissipation is required to keep the illumination as cold as possible.
Note that the illumination power decreases significantly at higher temperature. The radiance of the LED in Figure 1 drops by 20% from room temperature to a 100°C junction temperature, which reduces the operating range of the ToF camera at high temperature.
Note also that the rise and fall times of LEDs are current dependent, as shown in Figure 2: the lower the current, the longer the rise and fall times. A variation in rise or fall time produces a significant distance shift. In the example shown in Figure 2, the rise/fall time changes by approximately 18 ns between drive currents of 100 mA and 3000 mA. Without extra calibration and compensation, a distance shift of 2.7 m can be observed. This is really significant.
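The 2.7 m figure follows directly from the round-trip time-distance relation d = c·t/2 applied to the 18 ns timing change — a quick check with the quoted numbers:

```python
# Quick check of the quoted numbers: an uncompensated change in LED rise/fall time
# acts like a timing offset, and ToF distance is c * t / 2 (round trip).
c = 3.0e8            # speed of light, m/s
delta_t = 18e-9      # ~18 ns rise/fall-time change between 100 mA and 3000 mA drive
print(f"distance shift ~ {c * delta_t / 2:.1f} m")   # ~2.7 m
```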
Rules of thumb:
- Good thermal management of the illumination is key.
- When operating the camera with different LED currents, a separate calibration with at least offset compensation is required.
- Constant illumination power during the whole measurement cycle is key.
- Make sure that the illumination covers the required field of view, but not more.
- The modulation waveform is not important because 4th order harmonics or other effects are calibrated and compensated during runtime.
EETimes: CIS Business Not Affected by Coronavirus
EETimes reporter Junko Yoshida publishes an interview with Yole Developpement analysts, "Covid Economy: How Damaged Are We?" The CIS business is still going strong in spite of the pandemic:
Jim Janesick's Work at SRI
SRI publishes an article about Jim Janesick's recent work on image sensors for space astronomy:
"Janesick, senior principal research scientist at SRI’s Advanced Imaging lab, has been with the institute for 20 years and before that was at NASA’s famed Jet Propulsion Laboratory (JPL) for 22.
Janesick is the designer of SRI’s CMOS spaceborne imagers onboard the European Space Agency’s (ESA) Solar Orbiter launched in 2020, and NASA’s Parker Solar Probe launched in 2018, missions that orbit the sun to study solar physics. Janesick notes that “after many years of advanced development, SRI’s CMOS imagers were awarded a TRL6 rating,” referring to the Technology Readiness Level (TRL) scale of 1 to 9 that NASA uses. “Once the team was at TRL6 along with successful ground-based prototype demonstrations, NASA gave the green light to use SRI’s CMOS imager in an instrument called the Solar and Heliospheric Imager, or SoloHI. This automatically gave the same rating to the Wide-Field Imager for Parker Solar Probe (WISPR) instrument since both missions use the same CMOS imager.”
NASA and ESA selected SRI’s imager because they were designed and fabricated to withstand the sun’s harsh radiation environment over several years at close range. As such, the spacecrafts are capable of capturing the closest images of the sun.
As the Parker Probe and Solar Orbiter proceeds with its missions, Janesick continues his as well. These days, he is most excited about two upcoming SRI missions; the Europa Clipper spacecraft, scheduled for a 2024 launch and the Geostationary Operational Environmental Satellite (GOES)-U also scheduled for 2024. GOES will fly a solar instrument called Compact Coronagraph (CCOR) and the Europa Clipper will fly a Jupiter-oriented instrument named Europa Imaging System (EIS). GOES will use the same CMOS imager as the SoloHi imager. The Europa spacecraft will have the first large-scale flight approved CMOS imager ever flown (2k x 4k pixels). “We do an extensive testing and selection process in finding several perfect flight candidates, and we’re at that stage now for Europa,” Janesick states."
Jim Janesick is known to broad circles of image sensor designers for writing a book on the Photon Transfer Curve (PTC), one of the most important characterization tools today. He received the Exceptional Lifetime Achievement Award from the International Image Sensor Society in 2019.
"Janesick, senior principal research scientist at SRI’s Advanced Imaging lab, has been with the institute for 20 years and before that was at NASA’s famed Jet Propulsion Laboratory (JPL) for 22.
Janesick is the designer of SRI’s CMOS spaceborne imagers onboard the European Space Agency’s (ESA) Solar Orbiter launched in 2020, and NASA’s Parker Solar Probe launched in 2018, missions that orbit the sun to study solar physics. Janesick notes that “after many years of advanced development, SRI’s CMOS imagers were awarded a TRL6 rating,” referring to the Technology Readiness Level (TRL) scale of 1 to 9 that NASA uses. “Once the team was at TRL6 along with successful ground-based prototype demonstrations, NASA gave the green light to use SRI’s CMOS imager in an instrument called the Solar and Heliospheric Imager, or SoloHI. This automatically gave the same rating to the Wide-Field Imager for Parker Solar Probe (WISPR) instrument since both missions use the same CMOS imager.”
NASA and ESA selected SRI’s imager because they were designed and fabricated to withstand the sun’s harsh radiation environment over several years at close range. As such, the spacecrafts are capable of capturing the closest images of the sun.
As the Parker Probe and Solar Orbiter proceeds with its missions, Janesick continues his as well. These days, he is most excited about two upcoming SRI missions; the Europa Clipper spacecraft, scheduled for a 2024 launch and the Geostationary Operational Environmental Satellite (GOES)-U also scheduled for 2024. GOES will fly a solar instrument called Compact Coronagraph (CCOR) and the Europa Clipper will fly a Jupiter-oriented instrument named Europa Imaging System (EIS). GOES will use the same CMOS imager as the SoloHi imager. The Europa spacecraft will have the first large-scale flight approved CMOS imager ever flown (2k x 4k pixels). “We do an extensive testing and selection process in finding several perfect flight candidates, and we’re at that stage now for Europa,” Janesick states."
Jim Janesick is known to the broad cycles of image sensor designers for writing a book on Photon Transfer Curve (PTC), one of the most important characterization tools today. He received Exceptional Lifetime Achievement Award from International Image Sensor Society in 2019.
↧
Past, Present, and Future of Face Recognition
A preprint paper "Past, Present, and Future of Face Recognition: A Review" by Insaf Adjabi, Abdeldjalil Ouahabi, Amir Benzaoui, and Abdelmalik Taleb-Ahmed from the University of Bouira, Algeria, the University of Tours, France, and the University of Valenciennes, France, presents the challenges for facial recognition algorithms:
"Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interactions. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches reached some degree of maturity and reported very high rates of recognition. This performance is achieved in controlled environments where the acquisition parameters are controlled, such as lighting, angle of view, and distance between the camera-subject. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance will degrade dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced recognition systems efficiency. 3D data, however, is somewhat sensitive to changes in facial expressions. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions."
"Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interactions. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches reached some degree of maturity and reported very high rates of recognition. This performance is achieved in controlled environments where the acquisition parameters are controlled, such as lighting, angle of view, and distance between the camera-subject. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance will degrade dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced recognition systems efficiency. 3D data, however, is somewhat sensitive to changes in facial expressions. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions."
↧
LiDAR News: Leddartech, Xaos, IDTechEx, Velodyne, Uber
Leddartech's Frantz Saintellemy, President and COO, is featured in a podcast:
China-based Xaos Sensor presents its MEMS-based LiDARs priced at $200:
IDTechEx publishes a nice video with review of different LiDAR approaches on the market:
Forbes contributor Sabbir Rangwala publishes his analysis of Velodyne merger with GRAF and going public:
- Velodyne's valuation post-deal close has grown from ~$1.8B (Velodyne was estimated at a valuation of $1.6B before the merger announcement) to ~$3B as of July 21, 2020
- The ASP per unit drops from $7K in 2019 to $600 in 2024 – which is ~ a 10X reduction, and therefore a 10X increase in volumes shipped
- The growth in market share and unit volumes is based on a transition from the 360° FOV LiDAR products (Surround LiDAR which Velodyne has traditionally dominated) to the Vela series of products in which there is significant competition, and Velodyne is just starting to develop
- Finally, profitability and cash flow – they currently lose about $50M/year and break even by 2023 as the Vela products kick in
- The above analysis indicates that a part of their growth will need to come from acquisitions
Bloomberg reports that Uber considers a guilty plea by its former LiDAR engineer Anthony Levandowski to be proof that he’s a liar, and supports its decision to make Levandowski alone shoulder a $180M legal award Google won against him.
He agreed to plead guilty to Google-Waymo LiDAR trade secret theft and was driven into bankruptcy when Google won a contract-breach arbitration case against him. Levandowski was counting on Uber’s promise when it first hired him to provide legal cover, known as indemnification, from his former employer.
Uber now says it has no obligation to reimburse Levandowski for the $180M.