Posts

Showing posts from May, 2022

SmartSens Goes Public on Shanghai Stock Exchange

From PR Newswire: https://www.prnewswire.com/news-releases/smartsens-goes-public-on-shanghai-stock-exchange-and-sees-shares-surge-on-the-first-trading-day-301552661.html SHANGHAI, May 23, 2022 /PRNewswire/ -- On May 20, 2022, SmartSens was officially listed on the Science and Technology Innovation Board of the Shanghai Stock Exchange (Stock Code: 688213). On the first day of trading, SmartSens shares surged by 79.82%, with a total market value of 22.66 billion yuan. SmartSens Technology (Shanghai) Co., Ltd. (Stock Code: 688213) is a high-performance CMOS image sensor (CIS) chip design company. It is headquartered in Shanghai and has research centers in many cities around the world. Since its establishment, SmartSens has been dedicated to pushing forward the frontier of imaging technology and has grown in popularity among customers. SmartSens' CMOS image sensors provide advanced imaging solutions for a broad range of areas such as surveillance, machine vision, automotive, and cellphones…

Preprint on unconventional cameras for automotive applications

From arXiv.org --- You Li et al. write: Autonomous vehicles rely on perception systems to understand their surroundings for further navigation missions. Cameras are essential to perception systems thanks to the object detection and recognition capabilities provided by modern computer vision algorithms, compared to other sensors such as LiDARs and radars. However, limited by its inherent imaging principle, a standard RGB camera may perform poorly in a variety of adverse scenarios, including but not limited to low illumination, high contrast, and bad weather such as fog/rain/snow. Meanwhile, estimating 3D information from 2D image detections is generally more difficult than with LiDARs or radars. Several new sensing technologies have emerged in recent years to address the limitations of conventional RGB cameras. In this paper, we review the principles of four novel image sensors: infrared cameras, range-gated cameras, polarization cameras, and event cameras…

Single Photon Workshop 2022 - Call for Papers

After a short hiatus, the Single Photon Workshop will be held in person from October 31 to November 4, 2022 at the Korea Institute of Science and Technology (KIST) in Seoul, South Korea. SPW 2022 is the tenth installment in a series of biennial workshops on single-photon technologies and applications. After a one-year delay due to COVID-19, SPW 2022 is intended to once again bring together a broad range of people with interests in single-photon sources, single-photon detectors, photonic quantum metrology, and applications such as quantum information processing. Researchers from universities, industry, and government will present their latest developments in single-photon devices and methods with a view toward improved performance and new application areas. It will be an exciting opportunity for those interested in single-photon technologies to learn about the state of the art and to foster continuing partnerships with others seeking to advance the capabilities of such technologies…

PreAct and Espros working on new lidar solutions

From optics.org news: PreAct Technologies, an Oregon-based developer of near-field flash lidar technology, and Espros Photonics of Sargans, Switzerland, a firm producing time-of-flight chips and 3D cameras, have announced a collaboration agreement to develop new flash lidar technologies for specific use cases in automotive, trucking, industrial automation, and robotics. The collaboration combines the dynamic abilities of PreAct’s software-definable flash lidar with the “ultra-ambient-light-robust time-of-flight technology” from Espros, with the aim of creating what the partners call “next-generation near-field sensing solutions”. Paul Drysch, CEO and co-founder of PreAct Technologies, commented, “Our goal is to provide high-performance, software-definable sensors to meet the needs of customers across various industries. Looking to the future, vehicles across all industries will be software-defined, and our flash lidar solutions are built to support that infrastructure from the beginning.”…

Veoneer and BMW agreement on next-gen automotive vision systems

Veoneer to supply cameras to BMW Group's next-generation vision system for Automated Driving. Stockholm, Sweden, May 18, 2022: The automotive technology company Veoneer has signed an agreement to equip BMW Group vehicles with camera heads for their next-generation vision system for Automated Driving. The camera heads support the cooperation between BMW Group, Qualcomm Technologies, and Arriver™ that was announced in March this year. The high-definition 8 MP camera, mounted behind the rear-view mirror, monitors the forward path of the vehicle to provide reliable and accurate information to the vehicle control system. In BMW's next generation of Automated Driving Systems, BMW Group's current AD stack is combined with Arriver's Vision Perception and NCAP Drive Policy products on Qualcomm Technologies' system-on-chip, with the goal of designing best-in-class Automated Driving functions spanning NCAP, Level 2, and Level 3. Veoneer's camera heads are adapted to the…

ams OSRAM VCSELs in Melexis' in-cabin monitoring solution

ams OSRAM VCSEL illuminator brings the benefits of integrated eye safety to Melexis' automotive in-cabin monitoring solution. Premstaetten, Austria (11 May 2022) – ams OSRAM (SIX: AMS), a global leader in optical solutions, announces that it is supplying a high-performance infrared laser flood illuminator for the latest automotive indirect Time-of-Flight (iToF) demonstrator from Melexis. The ams OSRAM vertical-cavity surface-emitting laser (VCSEL) flood illuminator from the TARA2000-AUT family has been chosen for the new, improved version of the EVK75027 iToF sensing kit because it features an integrated eye-safety interlock. This allows for a more compact, more reliable, and faster system implementation than other VCSEL flood illuminators that require an external photodiode and processing circuitry. The Melexis evaluation kit demonstrates the capabilities of the new ams OSRAM 940nm VCSEL flood illuminator in combination with an interface board, a processor board, and the MLX750…

IMAGE BEST"End-to-end" design of computational camerasAMMJENACIONAL

A team from the MIT Media Lab has posted a new arXiv preprint titled "Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging". Abstract: Cameras were originally designed using physics-based heuristics to capture aesthetic images. In recent years, there has been a transformation in camera design from being purely physics-driven to increasingly data-driven and task-specific. In this paper, we present a framework to understand the building blocks of this nascent field of end-to-end design of camera hardware and algorithms. As part of this framework, we show how methods that exploit both physics and data have become prevalent in imaging and computer vision, underscoring a key trend that will continue to dominate the future of task-specific camera design. Finally, we share current barriers to progress in end-to-end design, and hypothesize how these barriers can be overcome. Preprint: https://arxiv.org/pdf/2204.09871.pdf
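
To illustrate what "end-to-end design of camera hardware and algorithms" means in practice, here is a minimal, hypothetical PyTorch sketch (not code from the paper): a learnable blur kernel stands in for the camera optics and is optimized jointly with a small reconstruction network under a single task loss, so the gradient shapes hardware and algorithm together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndToEndCamera(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        # "Optics": a learnable PSF, a hypothetical stand-in for real hardware.
        self.psf = nn.Parameter(torch.rand(1, 1, kernel_size, kernel_size))
        # "Algorithm": a small CNN that reconstructs the scene from the measurement.
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, scene):
        # Normalize the PSF so it conserves energy, then form the measurement.
        psf = F.softmax(self.psf.flatten(), dim=0).view_as(self.psf)
        measurement = F.conv2d(scene, psf, padding=self.psf.shape[-1] // 2)
        measurement = measurement + 0.01 * torch.randn_like(measurement)  # sensor noise
        return self.decoder(measurement)

model = EndToEndCamera()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scene = torch.rand(8, 1, 32, 32)        # toy "ground truth" scenes
opt.zero_grad()
loss = F.mse_loss(model(scene), scene)  # one task loss drives optics AND network
loss.backward()
opt.step()
```

In a real system the differentiable optics model would encode a manufacturable element (a phase mask or lens prescription) rather than a free-form kernel, but the training loop has the same shape.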

Advanced Navigation Acquires Vai Photonics

Advanced Navigation, one of the world’s most ambitious innovators in AI, robotics, and navigation technology, has today announced the acquisition of Vai Photonics, a spin-out from The Australian National University (ANU) developing patented photonic sensors for precision navigation. Vai Photonics shares a similar vision of providing technology to drive the autonomy revolution, and will join Advanced Navigation to commercialise its research into exciting autonomous and robotic applications across land, air, sea, and space. “The technology Vai Photonics is developing will be of huge importance to the emerging autonomy revolution. The synergies, shared vision and collaborative potential we see between Vai Photonics and Advanced Navigation will enable us to be at the absolute forefront of robotic and autonomy-driven technologies,” said Xavier Orr, CEO and co-founder of Advanced Navigation. “Photonic technology will be critical to the overall success, safety and reliability of these new systems…

Prof. Eric Fossum's interview at LDV Vision Summit 2018

Eric Fossum & Evan Nisselson Discussing The Evolution, Present & Future of Image Sensors Eric Fossum is the inventor of the CMOS image sensor “camera-on-a-chip” used in billions of cameras, from smartphones to web cameras to pill cameras and many other applications. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. He is currently a Professor with the Thayer School of Engineering at Dartmouth in Hanover, New Hampshire where he teaches, performs research on the Quanta Image Sensor (QIS), and directs the School’s Ph.D. Innovation Program. Eric and Evan discussed the evolution of image sensors, challenges and future opportunities.     More about LDV vision summit 2022: https://www.ldv.co/visionsummit Organized by LDV Capital https://www.ldv.co/   [An earlier version of this post incorrectly mentioned this interview is from the 2022 summit. This was in fact from 2018. --AI]

Photonics magazine article on Pi Imaging SPAD array

Photonics magazine has a new article about Pi Imaging Technology's high-resolution SPAD sensor array; some excerpts below. As the performance capabilities and sophistication of these detectors have expanded, so too have their value and impact in applications ranging from astronomy to the life sciences. As their name implies, single-photon avalanche diodes (SPADs) detect single particles of light, and they do so with picosecond precision. Single-pixel SPADs have found wide use in astronomy, flow cytometry, fluorescence lifetime imaging microscopy (FLIM), particle sizing, quantum computing, quantum key distribution, and single-molecule detection. Over the last 10 years, however, SPAD technology has evolved through the use of standard complementary metal-oxide-semiconductor (CMOS) technology. This paved the way for arrays and image sensor architectures that could increase the number of SPAD pixels in a compact and scalable way. Compared to single-pixel SPADs, arrays offer improved…

Newsight CMOS ToF sensor release

NESS ZIONA, Israel, Feb. 14, 2022 /PRNewswire/ -- Newsight Imaging - a leading semiconductor innovator developing machine vision sensors, spectral vision chips, and systems - announced today the upcoming release of the NSI9000, a one-chip (non-stacked) CMOS image sensor solution for depth imaging. The new chip is equipped with 491,520 5x5-micron depth pixels (1024x480, almost 5X more than its closest competitor), a global shutter (with up to 132 fps at full resolution), and an estimated depth accuracy of less than 1% of the distance. The sensor is designed for an optimal distance of 0-200 meters. The new chip offers new capabilities at a competitive price to significant growth markets for LiDAR systems, automotive ADAS, Metaverse AR/VR applications, Industry 4.0, and smart city/IoT, including smart traffic 3D vision systems. The sensor is the result of five years of collaborative innovation by Newsight and its partners such as Fraunhofer and TowerJazz. The product offers unique features…

Apple iPhone LiDAR applications

Polycam makes apps that leverage the lidar sensor on Apple's latest iPhone and iPad models. Their website presents a gallery of objects scanned with their app: https://poly.cam/explore We are beyond excited to announce our biggest update EVER to our LiDAR scanning pipeline 💥 We’ve fused LiDAR and Photo data to produce high quality LiDAR Objects so now you can get the best of both worlds Make sure to share and tag us on social, and as always, happy scanning 🤳 pic.twitter.com/DK59bsLP2E — polycam (@Polycam3D) April 20, 2022 Original press release about Polycam's app launch: Polycam launches a 3D scanning app for the new iPhone 12 Pro models with a LiDAR sensor. The app allows users to rapidly create high-quality, color 3D scans that can be used for 3D visualization and more. Because the scans are dimensionally accurate, they can be used to take measurements of virtually anything in the scan at once, rapidly speeding up workflows for many professionals such as architects and…

Dotphoton and Hamamatsu partnering on raw image compression

From Novus Light news: Dotphoton, an industry-leading raw image compression company, and Hamamatsu Photonics, a world leader in optical systems and photonics manufacturing, are pleased to announce their new partnership. Modern microscopy, drug discovery, and cell research are among the many applications that rely on the highest-quality image data. Hamamatsu, a renowned scientific camera manufacturer, provides the ultimate image quality needed for scientific research and the pharmaceutical industry in fields such as light-sheet microscopy, high-throughput screening, and histopathology. In these applications, the generation of large volumes of data leads to low scalability and high cost and complexity of the required IT infrastructure. This new partnership enables researchers to capture and preserve higher volumes of quality data and to make the most of modern processing methods, including AI-based image processing. “In industry and academia, storage budgets grow exponentially every year…

Will event cameras dominate computer vision?

Dr. Ryad Benosman, a professor at the University of Pittsburgh, believes a huge shift is coming in how we capture and process images in computer vision applications. He predicts that event-based (or, more broadly, neuromorphic) vision sensors are going to dominate in the future. Dr. Benosman will be a keynote speaker at this year's Embedded Vision Summit. EETimes published an interview with him; some excerpts below. According to Benosman, for as long as the image-sensing paradigm remains serviceable, it holds back innovation in alternative technologies; the effect has been prolonged by the development of high-performance processors such as GPUs, which delay the need to look for alternative solutions. “Why are we using images for computer vision? That’s the million-dollar question to start with,” he said. “We have no reasons to use images, it’s just because there’s the momentum from history. Before even having cameras, images had momentum.” Benosman argues that image-camera-based techniques for computer vision…
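
For readers unfamiliar with the sensors the interview advocates, here is a minimal sketch of the standard event-generation model (a generic textbook formulation, not Prof. Benosman's own code): each pixel fires an event whenever its log intensity changes by more than a contrast threshold, so a static scene produces no data at all.

```python
import numpy as np

def events_between(prev_frame, frame, threshold=0.2):
    """Return (y, x, polarity) for pixels whose log intensity changed
    by more than `threshold` between two frames."""
    delta = np.log(frame + 1e-6) - np.log(prev_frame + 1e-6)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return ys, xs, polarity

prev = np.random.rand(64, 64)
curr = prev.copy()
curr[20:30, 20:30] *= 2.0                  # a local brightness change
ys, xs, pol = events_between(prev, curr)   # events only where something changed
```

A real event camera does this asynchronously per pixel in analog circuitry rather than by differencing frames, which is where the latency and data-rate advantages come from.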

Lightweight object detection on the edge

Edge Impulse announced its new object detection algorithm, dubbed Faster Objects, More Objects (FOMO), targeting extremely power- and memory-constrained computer vision applications. Some quotes from their blog: FOMO is a ground-breaking algorithm that brings real-time object detection, tracking, and counting to microcontrollers for the first time. FOMO is 30x faster than MobileNet SSD and runs in <200K of RAM. To give you an idea, we have seen results around 30 fps on the Arduino Nicla Vision (Cortex-M7 MCU) using 245K of RAM. Since object detection models make a more complex decision than object classification models, they are often larger (in parameters) and require more data to train. This is why we hardly ever see such models running on microcontrollers. The FOMO model provides a variant in between: a simplified version of object detection that is suitable for many use cases where the position of the objects in the image is needed but a large or complex model cannot be used…
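
The "in between" idea the quote describes can be sketched in a few lines: instead of regressing bounding boxes, a small fully convolutional network outputs a coarse grid of per-cell class scores, and object positions are read off as activated cells (roughly, centroids). The PyTorch code below is a hypothetical, generic illustration of that approach, not Edge Impulse's implementation; all layer sizes are made up.

```python
import torch
import torch.nn as nn

class FomoLike(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # Tiny backbone with three stride-2 stages: 96x96 input -> 12x12 grid.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv head: per-cell class logits, plus one background channel.
        self.head = nn.Conv2d(32, num_classes + 1, 1)

    def forward(self, x):
        return self.head(self.backbone(x))  # (B, C+1, H/8, W/8)

model = FomoLike(num_classes=1)
img = torch.rand(1, 3, 96, 96)
logits = model(img)                          # (1, 2, 12, 12)
# Cells where an object class beats background (index 0) give positions.
obj_cells = logits.argmax(dim=1).nonzero()   # rows of (batch, cell_y, cell_x)
```

Dropping box regression is what keeps the parameter count and RAM footprint small enough for microcontrollers, at the cost of only knowing object locations to the resolution of the output grid.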

Low Light Video Denoising

A team from UC Berkeley and Intel Labs has posted a new preprint titled "Dancing under the stars: video denoising in starlight". They present a new method for denoising videos captured in extremely low illumination of fractions of a lux. Abstract: Imaging in low light is extremely challenging due to low photon counts. Using sensitive CMOS cameras, it is currently possible to take videos at night under moonlight (0.05-0.3 lux illumination). In this paper, we demonstrate photorealistic video under starlight (no moon present, <0.001 lux) for the first time. To enable this, we develop a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light levels. Using this noise model, we train a video denoiser using a combination of simulated noisy video clips and real noisy still images. We capture a 5-10 fps video dataset with significant motion at approximately 0.6-0.7 millilux with no active illumination. Comparing against alternative methods…
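
To make the "physics-based noise model" concrete: such models typically compose photon shot noise, read noise, row/banding noise, and ADC quantization. The sketch below is a generic simplified version with illustrative parameter values, not the authors' GAN-tuned model (which additionally fits the noise parameters to real sensor data).

```python
import numpy as np

def simulate_noisy_frame(photons, gain=2.0, read_std=1.5,
                         row_std=0.5, bit_depth=10):
    """photons: clean image as expected photon counts per pixel."""
    shot = np.random.poisson(photons)                            # photon shot noise
    read = np.random.normal(0.0, read_std, photons.shape)        # read noise
    row = np.random.normal(0.0, row_std, (photons.shape[0], 1))  # row banding
    signal = gain * shot + read + row
    max_val = 2 ** bit_depth - 1
    return np.clip(np.round(signal), 0, max_val)                 # ADC quantization

# Starlight-like regime: a small fraction of a photon expected per pixel.
clean = np.full((480, 640), 0.05)
noisy = simulate_noisy_frame(clean)
```

Synthetic clips rendered through a model like this give the denoiser abundant paired training data that would be impossible to capture directly at these light levels.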

Extreme depth-of-field light field camera

An article titled "Trilobite-inspired neural nanophotonic light-field camera with extreme depth-of-field" by Q. Fan et al. proposes a metalens design inspired by the bifocal vision system of an extinct marine arthropod. Abstract: A unique bifocal compound-eye visual system found in the now-extinct trilobite Dalmanitina socialis may have enabled it to be sensitive to light-field information and to simultaneously perceive both close and distant objects in the environment. Here, inspired by the optical structure of their eyes, we demonstrate a nanophotonic light-field camera incorporating a spin-multiplexed bifocal metalens array capable of capturing high-resolution light-field images over a record depth-of-field ranging from centimeter to kilometer scale, simultaneously enabling macro and telephoto modes in snapshot imaging. By leveraging a multi-scale convolutional neural network-based reconstruction algorithm, optical aberrations induced by the metalens are eliminated…

IMAGE BEST"Photon counting cameras for quantum imaging applications" Prof. Edoardo CharbonAMMJENACIONAL

"Photon counting cameras for quantum imaging applications"  Prof. Edoardo Charbon, Full Professor, Advanced Quantum Architecture Lab Abstract: Photon counting has entered the realm of image sensing with the creation of deep-submicron CMOS SPAD technology. The format of SPAD image sensors has expanded from 8×4 pixels in 2004 to the recent megapixel camera in 2019, and the applications have literally exploded in the last few years, with the introduction of proximity sensing and portable telemeters. SPAD image sensors are today in almost every smartphone and will soon be in every car. The introduction of Quanta Burst Photography has created a great opportunity for photon counting cameras, which are ideally suited for it, given its digital nature and speed; it is however computationally intensive. A solution to this problem is the use of 3D stacking, introduced for SPADs in 2015, where large silicon real estate is now available to host deep-learning processors, neural networks di