Channel: Image Sensors World

Jenoptik Acquires Trioptics

JENOPTIK AG acquires TRIOPTICS GmbH, known for its active alignment and lens testing technology. Both parties to the contract have agreed not to disclose details of the purchase price.


IniVation announces Event-Based Sensor Eye Tracker

iniVation introduces the Foveator eye tracking technology. Foveator uses AI-enabled neuromorphic technology to follow eye movements at high speed, with high accuracy and near-zero latency. Working like a tiny version of the retina and visual system, Foveator powers tracking at up to 1 kHz with latency below 3 ms.
Foveator technology enables next-generation VR and AR experiences, including:
  • Foveated rendering
    Better graphics and huge improvements in battery life
  • Foveated streaming
    Save >50% of bandwidth across 4G/5G networks
  • Foveated graphics transport
    Reduce graphics bandwidth needs
  • Ultra-low-power human interaction
    Lower speeds for lightweight, all-day AR battery life
Foveator is powered by the iniVation neuromorphic Dynamic Vision Platform.

First Camera Based on Sony InGaAs Stacked SWIR Sensor

LiDAR News: Waymo, Velodyne, Luminar, Aeye, Innoviz, Livox, Aurora

EETimes publishes an article about Waymo Laser Bear Honeycomb LiDAR:

"It’s been well over 16 months since Waymo announced a plan to license its lidar, called Laser Bear Honeycomb, to non-automotive companies.

Waymo is promising that its perimeter lidars, placed at four points around a vehicle, offer “an unparalleled field of view including up to 95° vertical field of view, and up to 360° horizontal field of view.”

This translates into fewer sensors for AVs to see more area. Waymo also claims that its lidars suffer little interference regardless of proximity; they are able to detect and avoid objects at very close range.

Leading AV companies — Waymo, GM Cruise and Argo AI — have either already acquired lidar technology companies or have developed lidars internally. Even Mobileye, an Intel company, is crafting its own lidar tech, Amnon Shashua, Mobileye’s CEO, acknowledged in a recent interview with EE Times.

Most industry analysts agree that many of the 70-plus lidar startups that have sprung up in the past several years are unlikely to survive in the Covid-19 economy. The public health crisis exacerbates the reality that the arrival of commercial AVs is no longer as imminent as once predicted.
"


TheVerge: A number of US-based LiDAR companies use the federal Paycheck Protection Program:
  • Velodyne, the top LIDAR manufacturer in the US, received a loan in the range of $5M to $10M to retain 450 jobs.
  • Luminar, an Orlando-based company that is making LIDAR laser sensors for Volvo, Toyota, and other automakers working on autonomous vehicles, got a loan between $5M and $10M to retain 341 jobs.
  • Aeye got a loan in the $2M to $5M range to save 85 jobs.

Innoviz announces sample availability of its automotive-qualified Innoviz One product:




SystemPlus Consulting publishes a reverse engineering of Hamamatsu edge-emitting laser diode and a photodiode inside Livox Horizon LiDAR:

"LiDARs are manufactured around four main components: the pulsed laser diode, avalanche photodiodes, the opto-mechanical system (to scan the environment in front of the car), and the processor.

System Plus Consulting proposes an analysis of the pulsed laser and the photodiode in the Horizon LiDAR from Livox: a Chinese company that sells a LiDAR system for automotive ADAS.
The LiDAR sensing module includes a custom six-photodiode array die from Hamamatsu, specifically developed for this LiDAR application. The design is particularly optimized to increase the sensitivity of the six avalanche photodiodes. The photodiode dies are assembled in a package with a 905nm narrow bandpass filter.

This LiDAR uses six edge-emitting lasers designed with three epitaxially stacked emitters. The six laser dies are assembled horizontally with an inclined mirror that redirects the light perpendicularly. Thermal management is performed by a sophisticated substrate.
"


Aurora announces its FirstLight Lidar based on Blackmore acquired technology and intended for Aurora’s next-generation test vehicles:

In-Pixel Temperature Sensors with an Accuracy of ±0.25 °C

Delft University of Technology and Harvest Imaging publish the MDPI paper "In-Pixel Temperature Sensors with an Accuracy of ±0.25 °C, a 3σ Variation of ±0.7 °C in the Spatial Domain and a 3σ Variation of ±1 °C in the Temporal Domain" by Accel Abarca and Albert Theuwissen.

"This article presents in-pixel (of a CMOS image sensor (CIS)) temperature sensors with improved accuracy in the spatial and the temporal domain. The goal of the temperature sensors is to be used to compensate for dark (current) fixed pattern noise (FPN) during the exposure of the CIS. The temperature sensors are based on substrate parasitic bipolar junction transistor (BJT) and on the nMOS source follower of the pixel. The accuracy of these temperature sensors has been improved in the analog domain by using dynamic element matching (DEM), a temperature independent bias current based on a bandgap reference (BGR) with a temperature independent resistor, correlated double sampling (CDS), and a full BGR bias of the gain amplifier. The accuracy of the bipolar based temperature sensor has been improved to a level of ±0.25 °C, a 3σ variation of ±0.7 °C in the spatial domain, and a 3σ variation of ±1 °C in the temporal domain. In the case of the nMOS based temperature sensor, an accuracy of ±0.45 °C, 3σ variation of ±0.95 °C in the spatial domain, and ±1.4 °C in the temporal domain have been acquired. The temperature range is between −40 °C and 100 °C."
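The paper's sensors are built around the substrate parasitic BJT with DEM, a bandgap-referenced bias, and CDS; that full chain is beyond a short sketch, but the underlying principle is the standard PTAT relation: the base-emitter voltage difference of a BJT biased at two currents is proportional to absolute temperature. A minimal sketch of that relation (function names and the current ratio are illustrative, not from the paper):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C

def delta_vbe(temp_k: float, current_ratio: float) -> float:
    """Ideal PTAT relation: base-emitter voltage difference of a BJT
    biased at two currents whose ratio is `current_ratio`."""
    return (K_B * temp_k / Q_E) * math.log(current_ratio)

def temp_from_delta_vbe(dvbe: float, current_ratio: float) -> float:
    """Invert the PTAT relation to recover absolute temperature in kelvin."""
    return dvbe * Q_E / (K_B * math.log(current_ratio))

# At 300 K with an 8:1 current ratio, delta Vbe is about 54 mV.
dvbe = delta_vbe(300.0, 8.0)
recovered_k = temp_from_delta_vbe(dvbe, 8.0)
```

The sensitivity (~180 µV/K for an 8:1 ratio) is why readout accuracy, and hence techniques like DEM and CDS, dominates the achievable temperature accuracy.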

Image Algorithmics on RGBW Color Filter Misconceptions

Image Algorithmics kindly sent me a presentation with the company view on RGBW CFA advantages:

"There is a strong preconception in the market that RGBW does not work well. This is understandable given the failure of previous attempts. What's worse, many engineers now believe that it fundamentally cannot work well. I am attaching a slide deck to address this misconception.

We have tested our algorithms on 0.8µm, 1.0µm, 1.12µm and 2.8µm RGBW sensors. RGBW has a 6dB+ SNR advantage over Bayer in low-light, read-noise-limited conditions and a 3dB+ SNR advantage over Bayer in bright-light, shot-noise-limited conditions. RGBW also has a 6dB dynamic range advantage.
"
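The 6 dB / 3 dB split follows from basic noise arithmetic: if a white pixel collects roughly twice the photoelectrons of a filtered pixel, SNR doubles (+6 dB) when read noise dominates, but only improves by sqrt(2) (+3 dB) when shot noise dominates. A minimal sketch (the electron counts and read noise below are illustrative, not Image Algorithmics' data):

```python
import math

def snr_db(signal_e: float, read_noise_e: float) -> float:
    """Pixel SNR in dB: shot noise sqrt(signal) adds in quadrature
    with Gaussian read noise (both in electrons rms)."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

READ_NOISE = 5.0  # e- rms, illustrative

# Low light, read-noise limited: white pixel collects ~2x the electrons.
gain_low = snr_db(4.0, READ_NOISE) - snr_db(2.0, READ_NOISE)          # ~ +6 dB
# Bright light, shot-noise limited: same 2x electron advantage.
gain_high = snr_db(40000.0, READ_NOISE) - snr_db(20000.0, READ_NOISE)  # ~ +3 dB
```

The historical difficulty with RGBW is not this SNR arithmetic but demosaicing and color reconstruction, which is where the company positions its algorithms.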

Will Samsung and Hynix Close the Gap with Sony?

SK Hynix publishes an article "Accelerated Multi-Camera Competition in Smartphone Market: Will Mobile-Centric CIS Demand Continue to Grow?" written by Yuak Pak, an analyst at KIWOOM Securities. The analyst presents his view on, among other things, the competition with Sony:

"For CIS, more demand for 12-inch wafers is being seen as the focus shifts from 8-inch wafers. In addition, as the number of pixels increases to more than 40 million, the process has started to move from 90nm to 32nm or less.

In particular, the manufacturing process of CIS is very similar to that of DRAMs, and the trench technology of DRAMs is applied to the process of high-pixel products. Therefore, it is highly likely that DRAM manufacturers will have a cost advantage over time.

SK hynix is actually applying DRAM trench process technology to eliminate light interference between pixels, while several experiments are underway to prevent the absorption of photons when using metal partition walls. In addition, ISOCELL of Samsung Electronics has also adopted DRAM process technology and is making efforts to refine their process to a 32nm level.

SK hynix and Samsung Electronics, two of the world’s leading semiconductor memory manufacturers, are expected to close the technological gap with leading CIS players such as Sony thanks to their superior DRAM technology.

SK hynix is currently operating 8- and 12-inch CIS lines. This year’s capacity of the 12-inch CIS line has increased by more than 60% compared to last year. In addition, since some lines are being redeployed for CIS, actual performance is expected to become more visible from the end of this year.

Samsung Electronics is also expanding the capacity of its 12-inch line in addition to its existing 8-inch CIS line, mainly by utilizing old DRAM lines. Starting with Line 11 in 2018, the company is planning to convert Line 13 into a CIS line in 2020.

The industry’s number one CIS manufacturer Sony continues to increase its 12-inch capacity in Japan, which is expected to intensify the competition for market share from the second half of this year. While Sony has to construct new lines, SK hynix and Samsung Electronics are converting and redeploying existing DRAM lines, which would make their products highly cost-competitive and give them an advantage in securing market share.
"

The article is complemented by the CIS market data from KIWOOM Securities:


"As of 2018, the share of demand for CIS by major industries shows the mobile industry dominating at an unbeatable 68% – followed by compute (9%), consumer (8%), security (6%), automotive (5%), and industrial (4%). In the future, this demand is expected to grow mainly within the mobile, automotive, and industrial sectors, while the market’s size – which was valued around USD 13.7 billion in 2018 – is expected to increase to USD 19 billion by 2022.

In addition to this, packaging technology integrating CIS, ISP, and DRAM is now being introduced to ultra-high-speed cameras, which is proving to be a beneficial change for companies producing both DRAMs and CIS in the mid to long term.
"

Toshiba Announces 200m-range Flash LiDAR Prototype

Toshiba announces a high-resolution, long-range technology for flash LiDAR, probably the next version of one presented at ISSCC 2020. At its heart is Toshiba’s compact, high-efficiency silicon photo-multiplier (SiPM).

In general, SiPMs are suitable for long-range measurement as they are highly sensitive to light. However, the light-receiving cells that make up an SiPM require recovery time after being triggered, and in strong ambient light conditions a large number of cells is also needed, since reserve cells must remain available to react to the reflected laser light.

Toshiba’s SiPM adds a transistor circuit that reboots the cells to reduce the recovery time. The cells function more efficiently and fewer are needed, allowing a smaller SiPM, as shown in Figure 1. This realizes a higher-resolution SiPM array while maintaining high sensitivity, as shown in Figures 2 and 3.


Field trials with a LiDAR prototype, shown in Figure 4, using commercially available lenses, from wide-angle to telephoto lenses, have demonstrated the system’s effectiveness over a maximum distance of 200m (Figure 5).



Assorted News: Kospet, Corephotonics, LG Innotek, Toshiba

The Kospet Prime smartwatch, released earlier this year, features a dual 5MP+2MP camera with software upscaling to 8MP:


GizmoChina quotes a rumor that the next generation Kospet Prime 2 smartwatch will have a 3D camera. "The new sensor will improve the smartwatch’s camera support and will aid in measuring the size of any object more accurately."

TheElec updates on Samsung-Corephotonics patent infringement lawsuit against Apple camera module supplier LG Innotek:

"At their latest hearing on Friday at Seoul Central District Court, when asked by the judge whether they had an outside appraiser to suggest, an LG InnoTek representative said they had someone in mind from overseas but not in South Korea, and that they would keep looking. LG also said that the patent invalidation judgment was expected at the end of August and asked for the next hearing to be set in September.

The trial has been effectively put on hold for the past eight months due to the issue of choosing an appraiser to evaluate the patent in question.

Corephotonics filed the lawsuit back in November 2018, alleging that LG InnoTek violated its patents related to a small field-of-view lens assembly. LG countered in June 2019 with its own request to South Korea’s patent office to invalidate the patents in question.
"

Toshiba proposes a 3D camera that builds a depth map from the different blur of objects closer and farther than the focal plane:

"Until now, it was considered theoretically too difficult to measure distance based on the shape of the blur, which is the same for objects both near and far when they are equidistant from the focal point (Figure 3). However, analytical results revealed a substantial difference between the blur shapes of near and far objects, even when equidistant from the focal point (Figure 4). With that, Toshiba successfully analyzed blur data from captured images by a deep learning module trained with a deep neural network model.

When light passes through a lens, the shape of the blur created is known to change depending on the light's wavelength and its position in the lens. In the developed network, position and color data are processed separately to properly perceive changes in blur shape, and then, after passing through a weighted attention mechanism, are used to control where on the brightness gradient to focus in order to correctly measure the distance (Figure 5). Through learning, the network is then updated to reduce any error between the measured distance and actual distance.

Using this AI module, Toshiba has confirmed that a single image captured with a commercially available camera achieves the same distance measurement accuracy as stereo cameras.

Toshiba will confirm the versatility of the system with commercially available cameras and lenses and speed up the image processing, aiming for public implementation in fiscal year 2020.
"

NHK Develops 3-Layer Organic Sensor

NHK has developed a three-layer color image sensor using organic films that detect only blue and only green light, layered vertically over a CMOS image sensor that detects red light.

"Incident light passes the first organic layer, which absorbs only the blue light component and converts to an electrical signal, and is transparent to the green and red components. The second organic layer absorbs only the green component, and the red component is detected by the CMOS image sensor. The organic layers are combined with transparent thin-film transistors, and the signals output from each of the layers can be combined to reproduce a color image.

This structure enables all color information of red, green and blue to be obtained within a single pixel, achieving a high-resolution image sensor that uses light more efficiently. We will continue working on reducing the pixel size and increasing the number of pixels, and accelerate R&D toward realizing a compact, high-resolution, single-chip camera.
"

Cameras and LiDARs in ADAS/AD Systems

ResearchInChina publishes its summary of the ADAS/AD approaches of different car manufacturers. Some rely mostly on cameras and radars, while others use many LiDARs:

Fujifilm Develops Multispectral Camera Based on Polarization-Sensing CIS

Fujifilm develops a new multispectral camera based on polarization-sensing image sensor:

  • The high-performance multispectral camera system is equipped with a lens fitted with newly-developed filters, a polarization image sensor that can capture specific directional polarization images, and a cutting-edge image processing function. The system can simultaneously record images of different wavelength ranges in high definition and present them in real time.
  • The newly-developed filters serve as “polarizer” that lets light in a specific direction of polarization pass through as well as “optical bandpass filter” that passes light of a specific wavelength range. The system uses three filters to split light into up to nine wavelength bands, while also polarizing the light of each wavelength band into a specific oscillation direction. (Figure 1)
  • The polarization information of light in each wavelength band that has passed through the filters is recorded by the polarization image sensor and processed with the cutting-edge image processing function for visual presentation in high resolution and at a high frame rate (Figure 2). The system also allows users to choose an optical bandpass filter of the optimum wavelength band for their monitoring target.


While we are on the subject of polarization-sensing devices, the OSA Image of the Week shows a nice visualization of mechanical stresses in plastic cutlery:

Holst Centre Non-spoofable Biometric ID Sensor can be Integrated in Smartphones

Researchers at Holst Centre have combined the organic NIR PD with an oxide thin-film transistor backplane and a focusing lens to create an NIR image sensor measuring 2.4 x 3.6 cm, large enough to image the palm of a hand or multiple fingers at a distance. Its 500-ppi resolution is state-of-the-art for biometric image sensors, enabling high-quality images of the vein pattern. In addition, the sensor achieves an external QE (EQE) of 40% at 940 nm and a dark current of around 10⁻⁶ mA/cm².
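As a sanity check on the quoted figures, an EQE of 40% at 940 nm corresponds to a responsivity of about 0.3 A/W via the standard photodiode relation R = EQE·qλ/(hc); this derived number is my own, not one published by Holst Centre:

```python
# Responsivity implied by a given external quantum efficiency.
# R = EQE * q * lambda / (h * c); all constants in SI units.
Q_E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def responsivity(eqe: float, wavelength_m: float) -> float:
    """Photodiode responsivity in A/W from EQE and wavelength."""
    return eqe * Q_E * wavelength_m / (H * C)

r_940 = responsivity(0.40, 940e-9)  # ~0.30 A/W
```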

"Together with a NIR light source, the prototype image sensor opens the door to contactless biometric security through vein pattern detection. Our thin-film technologies make for extremely thin and potentially flexible sensors that could be easily integrated into existing displays and things like mobile phones or cash machine screens, eliminating the need for separate ID and credit cards," says Daniel Tordera, Senior Scientist at Holst Centre.

Having demonstrated the potential of large-area NIR sensors for vein detection, Holst Centre is continuing to refine the technology and push its sensitivity deeper into the NIR region. With PDs efficient up to 1100 nm, these latest developments could enable new applications such as eye tracking, quality control in food production, condition monitoring of pipes and non-invasive in-body medical imaging including large area oxygen saturation (SpO2) measurements, conformable optical brain scans and cuffless blood pressure monitoring.

Trinamix Beam Profile Analysis for 3D Imaging and Material Detection

trinamiX introduces a novel technology called Beam Profile Analysis to measure distance and, simultaneously, obtain material features from projected laser spots. At the core of the technology is a new class of algorithms, which provides features derived from the analysis of the two-dimensional intensity distribution of each projected spot. These features correlate with distance and material properties and can be further processed by machine-learning approaches on mobile, embedded or PC-type platforms. A Beam Profile Analysis module can be built from components available at scale and consists of a standard CMOS camera and a dot projector.

"Beam Profile Analysis uses the spot shape, via physically inspired features, to directly measure distance and material information. Among the most important physical properties are the specifics of diffuse scattering (Lambertian scattering, volume scattering), laser speckles and lens convolutional properties (for example, several kinds of aberrations). In other words, Beam Profile Analysis consists of a recipe for specific periodic laser projection grids and a collection of specifically derived filter kernels and functions thereof to extract both distance and material classes."

LiDAR News: Benewake, Ouster, Quanergy, Ibeo, Conti

Benewake LiDAR is used to check toilet occupancy at an airport:

"To save passengers time and reduce congestion in public toilets, Urumchi Diwopu International Airport in China recently adopted the Benewake TF-Luna LiDAR. TF-Luna detects toilet traffic and the number of free stalls, and both figures are displayed on a screen outside the toilet. This system not only relieves congestion in public toilets but also saves users the time of finding a free toilet, playing a significant role in improving utilization and passenger satisfaction."


EETimes reporter Junko Yoshida publishes an article about Ouster LiDAR internals:

"In an interview with EE Times last week, Ouster’s founder and CEO Angus Pacala boasted that his company has already picked up 700 design wins over 15 different industries in 50 countries.

Impressive, but where’s Ouster’s advantage?

Pacala said, “We chose technology designed to work in many markets.” Ouster has developed a lidar platform built on “all-CMOS semiconductors.” That makes Ouster’s products “digital lidars,” according to Pacala.

Ouster’s competitors, including Velodyne and Waymo, deploy hundreds of off-the-shelf discrete components to make their spinning lidars work. In contrast, Ouster has developed tightly integrated custom vertical-cavity surface-emitting lasers (VCSELs) and an ASIC that incorporates single-photon avalanche diode (SPAD) arrays.

Ouster’s platform also includes a Xilinx FPGA, responsible for processing massive amounts of data.
"


Quanergy unveils the MQ-8 3D LiDAR and perception software, which are part of Quanergy’s Flow Management platform. Designed with a new smart beam configuration, the MQ-8 solution delivers up to 140 m continuous tracking range, enabling up to 15,000 m² coverage with a single sensor for flow management applications such as security, smart cities, social distancing, and smart spaces.


EPIC Online Technology Meeting on ADAS and Autonomous Driving has Ibeo and Continental LiDARs presentations:


Yole on Coronavirus Impact on CIS Market

Sensor with AI-Controlled Per-Pixel Exposure

Stanford University, University of Manchester, and IBM Research in Zurich publish a paper "Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing With Programmable Sensors" by Julien N.P. Martel, Lorenz K. Mueller, Stephen J. Carey, Piotr Dudek, and Gordon Wetzstein.

"Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors’ ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor–processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we demonstrate state-of-the-art performance for HDR and high-speed compressive imaging in simulation and with experimental results."
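The spatially varying exposure idea the paper builds on can be illustrated with a toy example: tile short and long shutters across the sensor, let bright pixels clip, and recover radiance wherever a pixel stayed below full well. The 2x2 code and scene values below are purely illustrative; the paper's contribution is learning such codes end-to-end with the reconstruction network.

```python
import numpy as np

# Toy spatially varying exposure: alternate long (1.0) and short (0.25)
# shutters across the sensor, clip at full well, then divide out the
# exposure wherever the pixel did not saturate.
radiance = np.array([[0.5, 2.0, 6.0, 7.5],
                     [3.0, 0.2, 5.0, 1.0]])       # ground-truth scene
exposures = np.tile([1.0, 0.25], (2, 2))           # per-pixel shutter code
full_well = 1.0

raw = np.minimum(radiance * exposures, full_well)  # capture with clipping
valid = raw < full_well                            # unsaturated pixels
est = np.where(valid, raw / exposures, np.nan)     # exact where valid
# Saturated pixels (NaN here) would be filled from unsaturated neighbors;
# the paper replaces this heuristic pipeline with a learned reconstruction.
```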


Thanks to PD for the link!

Interview with Omnivision on Disposable Endoscopy

Yole Développement publishes an interview with Tehzeeb Gunja, Director of Medical Marketing at OmniVision. A few quotes and slides:

"With more than 500 customers and approximately 600 active projects, OmniVision possesses deep knowledge of the medical industry, and strong connections to all leading ecosystem partners and end customers globally.

Technological advancements will also continue to drive disposable adoption. CMOS imagers continue to shrink, which will allow endoscopes with smaller ODs to be designed using chip-on-tip technology. Wafer-level modules will also support large optical format imagers, thus enabling disposable, 1080p resolution for the larger OD endoscopes used in gastrointestinal and laparoscopic procedures. Additionally, there is a growing trend toward multimodal imaging and diagnosis, where the imager is used to position an ultrasound or OCT probe inside the body.

The extremely small size of newer imagers makes it feasible to be integrated directly into endoscopic tools, allowing direct line-of-sight visualization. Additionally, there is growing interest in a range of applications beyond white-light endoscopy, including the use of ultraviolet and near infrared light for fluorescence, chromo-endoscopy and virtual endoscopy. There are also novel endoscopic applications that are moving toward mainstream adoption, including narrow band imaging, multispectral imaging and light polarized imaging, among others.
"

Blackmagic Announces 80MP 60fps Camera

BusinessWire: Blackmagic Design announces the URSA Mini Pro 12K digital film camera with a 12,288 x 6,480 12K Super 35 image sensor, 14 stops of dynamic range, and a 60 fps frame rate in 12K at 80MP per frame.

"The Blackmagic URSA Mini Pro 12K features a revolutionary new sensor with a native resolution of 12,288 x 6480, which is an incredible 80 megapixels per frame. The Super 35 sensor has a superb 14 stops of dynamic range and a native ISO of 800. The new 12K sensor has equal amounts of red, green and blue pixels and is optimized for images at multiple resolutions. Customers can shoot 12K at 60 fps or use in-sensor scaling to allow 8K or 4K RAW at up to 110 fps without cropping or changing their field of view."

Brand ambassador John Brawley shares details on a cinematographers’ mailing list:
  • Brand new sensor, 3 years in the making.
  • 79 MP.
  • Native 800 iso.
  • 14 stops (that’s probably a bit conservative, they haven’t been able to properly check it because the models are based on Bayer sensors...:-)
  • It’s not Bayer, but it has a very small pixel pitch of 2.2 microns. (Alexa is 8)
  • Instead of the Bayer 2x2 GRBG grid, it has a 6x6 grid: 6 G, 6 B and 6 R plus 18 W pixels.
  • The W are clear or “white” pixels. This overcomes the reduced sensitivity issue of a 2.2 micron pitch.
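The pixel arithmetic checks out: a 6x6 tile has 36 sites, and 6+6+6 color pixels plus 18 white pixels fills it exactly, devoting half the sensor to unfiltered pixels. A hypothetical tile honoring those counts (the actual Blackmagic layout has not been published; this arrangement is purely illustrative):

```python
from collections import Counter

# Hypothetical 6x6 RGBW tile: 3 white + 3 color pixels per row,
# with the color triplet rotated from row to row.
TILE = [
    "WGWRWB",
    "RWBWGW",
    "WBWGWR",
    "GWRWBW",
    "WRWBWG",
    "BWGWRW",
]

counts = Counter("".join(TILE))
# Half the 36 sites are white; the rest split evenly between R, G, B.
```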


The sensors are shown in Blackmagic presentation video:


Thanks to PF and others for the pointer!

Smartsens Sees Automotive Sensors as its Future Growth Engine

Smartsens talks about its strategic move to the automotive imaging market:

"Autonomous Driving presents both challenges and new opportunities for the CIS industry in China. The current shipments of automotive chips show that the gap between domestic and foreign semiconductor companies remains wide and presents an ongoing challenge for Chinese companies. It is, however, an opportunity for SmartSens.

We believe that SmartSens’ strengths in the field of security systems can create an advantage in moving into the automotive industry by providing superior night vision imaging performance combined with other in-vehicle electronics technologies such as LED flicker suppression technology and PixGain HDR technology, just to name a few. In addition, SmartSens recently acquired Shenzhen-based Allchip Microelectronics, positioning us perfectly in research and development for the next-generation automotive sensor technology.

“In the past, the semiconductor business in China relied heavily on overseas technology and research. With the rise of the local semiconductor development and the maturity of domestic CIS technology in recent years, however, we are seeing a seismic shift towards China and Asia,” said Mr. [James Ouyang, the newly appointed Deputy General Manager at SmartSens.]
"
