Computational sensing and embedded low-power processing

Computational sensing combines optics, sensing, and processing to maximize the information collected for a given application. Computational sensing systems capture information in real time from multi-modality sensors and special-purpose optical paths, and apply processing to extract the signal information of interest.

CVT has developed several computational sensing technologies and systems for government and commercial clients.   

Embedded low-power processing

[Image: SRI's low-power processors integrated in today's robotic platforms]

CVT has extensive experience developing low-power AI edge processing solutions for a range of platforms and applications. On the DARPA Hyper-Dimensional Data Enabled Neural Networks (HyDDENN) program, CVT demonstrated non-Multiply-Accumulate (non-MAC) quantized neural networks with a 100X reduction in power-latency factor, performing real-time neural network reconfiguration on a variety of commercial off-the-shelf (COTS) field programmable gate arrays (FPGAs). Building on HyDDENN, CVT is now developing its NeuroEdge technology to implement low-power edge computing on COTS graphics processing units (GPUs), FPGAs, and AI processors using its newly developed NeuroEdge Software Development Kit (SDK). For the Advanced Research Projects Agency – Energy (ARPA-E), CVT developed a battery-powered occupancy sensor system, capable of multi-year operation, for household and commercial buildings to reduce building HVAC power consumption.
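The text does not describe HyDDENN's non-MAC arithmetic in detail. One common way to build a multiplier-free neural network layer is power-of-two weight quantization, in which every multiply becomes a bit shift; the Python sketch below illustrates that idea under stated assumptions (the function names and the 4-bit exponent range are illustrative, not the program's actual design).

import numpy as np

def quantize_pow2(weights, exp_bits=4):
    # Approximate each weight as sign * 2**exp so that multiplying by a
    # weight reduces to an arithmetic shift (no MAC units needed).
    sign = np.sign(weights).astype(np.int64)
    exp = np.round(np.log2(np.abs(weights) + 1e-12))
    lo, hi = -(2 ** (exp_bits - 1)), 2 ** (exp_bits - 1) - 1
    return sign, np.clip(exp, lo, hi).astype(np.int64)

def shift_add_layer(x_int, sign, exp):
    # MAC-free dense layer: x_int is an (n_in,) int64 activation vector;
    # sign/exp are (n_in, n_out) quantized weights. Shifts replace multiplies.
    left = x_int[:, None] << np.maximum(exp, 0)
    right = x_int[:, None] >> np.maximum(-exp, 0)
    return np.sum(sign * np.where(exp >= 0, left, right), axis=0)

# Example: quantize a random weight matrix and run one layer.
rng = np.random.default_rng(0)
sign, exp = quantize_pow2(rng.normal(size=(8, 4)))
y = shift_add_layer((rng.normal(size=8) * 64).astype(np.int64), sign, exp)

Because every weight application is a shift-and-add, a layer like this maps naturally onto FPGA fabric without dedicated multiplier blocks, which is one route to the kind of power-latency savings the program reported.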

SmartVision

CVT developed SmartVision technology to improve a sensor’s ability to collect the most salient information in a large, cluttered scene and share this data with end users over narrow-bandwidth data links. SmartVision dynamically optimizes single- or multi-modality sensing parameters (e.g., integration time, frame rate, wavelength) in local regions of interest across the scene to capture the most mission-relevant information.
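SmartVision's per-region control loop is not detailed here. As a rough illustration of the idea, the sketch below nudges each region of interest's integration time toward a target brightness; the target level, gain, and exposure bounds are assumed values, not SmartVision parameters.

import numpy as np

TARGET_LEVEL = 0.5          # desired mean ROI brightness (normalized), assumed
GAIN = 0.5                  # proportional control gain, assumed
T_MIN, T_MAX = 1e-4, 3e-2   # integration-time bounds in seconds, assumed

def update_integration_times(frame, rois, t_current):
    # Per-ROI proportional exposure control.
    # frame: 2-D float array normalized to [0, 1].
    # rois: list of (row, col, height, width) regions of interest.
    # t_current: current integration time for each ROI, in seconds.
    t_next = []
    for (r, c, h, w), t in zip(rois, t_current):
        level = frame[r:r + h, c:c + w].mean()
        # Raise exposure when the region is dark, lower it when saturated.
        t_new = t * (1.0 + GAIN * (TARGET_LEVEL - level) / max(level, 1e-3))
        t_next.append(float(np.clip(t_new, T_MIN, T_MAX)))
    return t_next

# Example: two dark regions in a synthetic frame get longer exposures.
frame = np.full((480, 640), 0.2)
rois = [(100, 100, 64, 64), (300, 400, 64, 64)]
print(update_integration_times(frame, rois, [1e-3, 1e-3]))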


Multi-sensor fusion and visualization 

CVT has also developed very low-power, small-form-factor sensing systems that combine multi-modal, multi-aperture, and multi-exposure sensor information into a single display or video feed. Imagery from wearable or airborne sensors is fused and aligned in real time with very low latency, then presented to users or downstream machine processing so they can respond rapidly to changing information. By using computer vision and optimized, small-footprint machine learning algorithms to present information in a single display, these systems greatly reduce user workload.
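As an offline illustration of the alignment-and-blend step, the sketch below warps a thermal frame into a visible camera's view and overlays it. The homography H is assumed to come from a prior calibration, and this OpenCV sketch is an illustrative stand-in, not CVT's real-time embedded implementation.

import cv2
import numpy as np

def fuse_frames(visible_bgr, thermal_u8, H, alpha=0.6):
    # Warp the single-channel thermal frame into the visible camera's
    # pixel coordinates using a precalibrated 3x3 homography.
    h, w = visible_bgr.shape[:2]
    aligned = cv2.warpPerspective(thermal_u8, H, (w, h))
    # Colormap the thermal image so hot regions stand out, then blend.
    overlay = cv2.applyColorMap(aligned, cv2.COLORMAP_JET)
    return cv2.addWeighted(visible_bgr, alpha, overlay, 1.0 - alpha, 0.0)

# Example with synthetic frames and an identity homography.
visible = np.zeros((480, 640, 3), dtype=np.uint8)
thermal = np.zeros((480, 640), dtype=np.uint8)
fused = fuse_frames(visible, thermal, np.eye(3))

A deployed system would replace the fixed homography with per-frame registration and run the pipeline on embedded hardware to meet the low-latency requirement described above.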

