Abstract
In most artificial vision systems, the sensor is placed in the image plane, analogous to the retina in the human eye, while the pupil controls the intensity, depth of field or spatial filtering of the image projected on the sensor. However, intriguing capabilities are enabled when the image sensor is instead placed in the pupil of the optical system and the object is viewed in the Fourier domain. This talk will review optical architectures, target structures and computational methods used in conjunction with pupil imaging for semiconductor metrology in advanced process nodes.
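The relation this exploits is standard Fourier optics: in a 2f system the field at the pupil plane is the Fourier transform of the object-plane field, so a periodic target appears in the pupil as discrete diffraction orders. A minimal numerical sketch (generic, not any specific architecture from the talk; the grid size and grating pitch are arbitrary illustration values):

```python
import numpy as np

# In a 2f system, the field at the pupil (Fourier) plane is the Fourier
# transform of the object-plane field, so a periodic target shows up in
# the pupil as discrete diffraction orders.
n = 512
x = np.arange(n) - n // 2
xx = np.meshgrid(x, x)[0]

# Object: a binary grating, a stand-in for a periodic metrology target.
pitch = 16  # pixels, arbitrary
object_field = 0.5 * (1.0 + np.sign(np.cos(2 * np.pi * xx / pitch)))

# Pupil-plane intensity: squared magnitude of the centered 2D FFT.
# The positions and strengths of the resulting peaks encode the
# target's pitch and profile.
pupil_intensity = np.abs(np.fft.fftshift(np.fft.fft2(object_field))) ** 2
```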
Bio
Mike holds a B.Sc. from the University of NSW (1983), a Doctorate in Physics from the Technion (1989) and is a graduate of the Technion Institute of Management (2007). Since completing his doctorate, he has been employed primarily in the semiconductor industry, focusing on process control and metrology by optical methods. After a three-year sabbatical in the sustainability sector he has returned to semiconductors and currently holds the position of Senior Director, Strategic Technology of the Optical Metrology Division of KLA-Tencor Israel. He has co-authored around 80 scientific papers and is co-inventor of over 100 patents.
Abstract
We tackle the task of creating a city-scale 3D model by dividing it into blocks of manageable size and then combining the smaller models together. The latter stage necessitates registering the models to one another, which is the focus of this lecture.
Typically, an ICP-type algorithm is used; however, this does not extend well to more than two models.
We present a novel method for fine registration of urban models, in which crude initial alignment parameters are refined to align all the models together with minimal error in a least-squares sense.
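As a building block, the pairwise least-squares rigid alignment of corresponding 3D points has a classic closed-form SVD solution; the sketch below shows only this pairwise step, not the talk's joint refinement of all models together:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points. Classic
    SVD/Procrustes solution for a single pair of models; the joint
    multi-model refinement of the talk is not shown here.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```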
Bio
David Arnon leads structure-from-motion algorithm design at Rafael's image processing department. He has 8 years of experience in computer vision and 15 years of R&D experience. David holds an M.Sc. in computer science and a B.Sc. in mathematics, both from the Hebrew University.
Abstract
Rolling Shutter (RS) is a camera exposure mode that is common in CMOS sensors due to the special requirements of small pixel size and longer exposure times (practically every smartphone sensor uses RS). In the RS mechanism, the exposure of each row of pixels is delayed relative to the previous row by roughly 8-30 μsec. This delay causes capture artifacts when the object or camera is in motion. For a camera that suffers from strong mechanical vibrations, the captured video exhibits a wobble phenomenon, also termed the "jello effect", which is extremely disturbing to the viewer's eye.
Image-processing correction of RS distortion is a challenging task due to the short spatial/temporal correlations between neighboring pixels/frames, especially in strong-dynamics scenarios. In this work we propose a stabilization procedure that models rolling shutter via a dense set of projective transformations across frame rows. The estimated distortion is used to re-render the video as though all the pixels in a frame were captured at the same time. As a post-processing stage, temporal information is used to further stabilize the frame sequence.
The suggested algorithm is highly efficient, robust to foreground motion, and produces a level of stabilization that significantly outperforms state-of-the-art RS stabilization methods.
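To make the re-rendering step concrete, here is a heavily reduced sketch: assume a per-row motion estimate is already available, and collapse each row's projective transform to a single horizontal shift (an illustrative simplification of the dense per-row model described above):

```python
import numpy as np

def derolling_warp(frame, row_shift):
    """Re-render `frame` as if all rows were captured simultaneously.

    Simplified stand-in for a dense per-row projective model: each row
    r is only translated horizontally by row_shift[r] (assumed to be
    estimated elsewhere, e.g. from inter-frame motion), rather than
    warped by a full homography.
    """
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    cols = np.arange(w)
    for r in range(h):
        src = np.clip(cols + row_shift[r], 0, w - 1)   # where row r's pixels came from
        out[r] = frame[r, np.round(src).astype(int)]   # nearest-neighbor resample
    return out
```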
Bio
Udy Danino holds B.Sc. and M.Sc. degrees in electrical engineering from Tel-Aviv University and an MBA from the Technion. He joined Applied Materials in 2005 and since then has held positions of increasing responsibility within the algorithm development department. In his last role, he managed the Mask Inspection and Defect Review algorithm groups. In 2012 he founded SAIPS - a boutique R&D house that specializes in the development of top-notch customized algorithmic and software solutions in the fields of image & video processing, computer vision, machine learning and signal processing. Since 2007 he has served as a lecturer in the school of electrical engineering at Tel-Aviv University.
Abstract
In the first part of the talk we will provide an overview of the industrial internet (also known as the "Internet of Things (IoT)") and present some of the challenges in this domain, such as big data analytics, predictive analytics, large-scale asset management and machine vision problems in advanced manufacturing. In the second part of the talk we will show how mathematics (Approximation Theory and Harmonic Analysis) can be combined with Machine Learning to solve some of these challenges. Tools such as the tree-based Random Forest and the Gradient Boosting Machine are popular and powerful machine learning algorithms that are also employed as part of 'Deep Learning' systems. Constructing the right form of wavelet decomposition of these tools allows establishing an ordering of their forest decision nodes: from 'significant' features to 'less significant' to 'insignificant' noise. Consequently, simple wavelet techniques can be used to overcome the presence of noise and misclassifications in the training sets and to compress large-scale neural networks.
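A toy sketch of the node-ordering idea (the node records and the norm used below are illustrative assumptions; the wavelet construction in the talk is more careful):

```python
import numpy as np

# Each internal node of a fitted tree contributes a piecewise-constant
# increment to the prediction; treating that increment as a "wavelet"
# whose norm is |delta| * sqrt(#samples it affects) lets us sort nodes
# from significant structure down to noise, and truncate the sum to
# denoise or compress the forest.

# Toy node records: (increment_magnitude, n_samples_in_node).
# Values are made up for illustration.
nodes = [(0.9, 400), (0.5, 150), (0.08, 60), (0.02, 12), (0.6, 220)]

norms = [abs(d) * np.sqrt(n) for d, n in nodes]
order = np.argsort(norms)[::-1]          # most significant first

k = 3                                    # keep only the top-k nodes
kept = [nodes[i] for i in order[:k]]
print("kept nodes:", kept)
```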
Bio
Shai serves as a principal scientist at GE Global Research and is a visiting associate professor at the school of mathematics at Tel-Aviv University, Israel. His research interests are theoretical approximation theory, harmonic analysis and their applications in data science.
Abstract
Images are 2D signals, and should be processed as such – this is the common belief in the image processing community. Is it truly the case? Around thirty years ago, some researchers suggested converting images into 1D signals, so as to harness well-developed 1D tools such as adaptive-filtering and Kalman-estimation techniques. These attempts resulted in poorly performing algorithms, strengthening the above belief. Why should we force unnatural causality between spatially ordered pixels? Indeed, why?
In this talk I will present a conversion of images into 1D signals that leads to state-of-the-art results in a series of applications – denoising, inpainting, compression, and more. The core idea in our work is that there exists a permutation of the image pixels that carries most of the "spatial content", and this ordering is within reach even if the image is corrupted. We expose this permutation and use it to process the image as if it were a one-dimensional signal, successfully treating a series of image processing problems.
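A minimal sketch of the approach, assuming the permutation is obtained by greedily chaining nearest-neighbor patches (a hypothetical simplification of the published ordering scheme):

```python
import numpy as np

def greedy_patch_ordering(patches):
    """Greedy nearest-neighbor ordering of patch vectors.

    Simplified stand-in for the permutation discussed in the talk:
    start from patch 0 and repeatedly jump to the closest unvisited
    patch, producing an ordering along which the signal varies
    smoothly. O(N^2); illustration only.
    """
    n = len(patches)
    visited = np.zeros(n, dtype=bool)
    order = [0]
    visited[0] = True
    for _ in range(n - 1):
        d = np.linalg.norm(patches - patches[order[-1]], axis=1)
        d[visited] = np.inf
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited[nxt] = True
    return np.array(order)

# Usage sketch: order pixels via their surrounding patches, smooth the
# resulting 1D signal, and undo the permutation.
# perm = greedy_patch_ordering(patch_matrix)   # patch_matrix: (N, p*p)
# x1d = pixels[perm]                           # pixels as a smooth 1D signal
# x1d_filtered = np.convolve(x1d, np.ones(5) / 5, mode="same")
# pixels_out = np.empty_like(pixels); pixels_out[perm] = x1d_filtered
```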
Bio
Michael Elad received his B.Sc. (1986), M.Sc. (1988) and D.Sc. (1997) from the department of Electrical Engineering at the Technion, Israel. Since 2003 he has been a faculty member at the Computer-Science department at the Technion, and since 2010 he has held a full-professorship position.
Michael Elad works in the field of signal and image processing, specializing in inverse problems, sparse representations and super-resolution. Michael received the Technion's best lecturer award six times; he is the recipient of the 2007 Solomon Simon Mani award for excellence in teaching, the 2008 Henri Taub Prize for academic excellence, and the 2010 Hershel-Rich prize for innovation. Michael has been an IEEE Fellow since 2012. He serves as an associate editor for SIAM SIIMS, IEEE-TIT, and ACHA, and as a senior editor for IEEE SPL.
Abstract
In this talk I will present RingIt – a novel technique for sorting an unorganized set of casual photographs taken along a general ring, where the cameras capture a dynamic event at the center of the ring. The multitude of cameras constantly present nowadays has redefined not only the meaning of capturing an event, but also the meaning of sharing it with others. The images are frequently uploaded to some common platform, like Facebook or Picasa, and the image-navigation challenge naturally arises. Our technique recovers the spatial order of a set of still images of an event taken by a group of people situated around it, allowing for a sequential display of the captured object.
Bio
Hadar Averbuch-Elor received a B.Sc. in electrical engineering from the Technion (cum laude) in 2012. She joined Rafael in 2011 and has since worked as an algorithms developer in the image processing department. She is currently a Ph.D. candidate at Tel Aviv University. Her research interests include multi-view systems and unsupervised 3D modeling.
Abstract
A novel technology of true, high-quality, wide-viewing-angle, full-color and "touchable" digital holography will be presented. The technology is developed by RealView Imaging Ltd., which is introducing the world's first 3D holographic display and interface system, initially for medical imaging applications. RealView's proprietary technology projects hyper-realistic, dynamic 3D holographic images "floating in the air" without the need for any type of eyewear or a conventional 2D screen. The projected 3D volumes appear in free space, allowing the user to literally touch and interact precisely within the image, presenting a unique and proprietary breakthrough in digital holography and real-time 3D interaction capabilities. For more information please visit www.realviewimaging.com. An overview of the company, technology, capabilities and recent accomplishments will be presented.
Bio
Mr. Gelman is a highly skilled R&D and business executive, with vast hands-on experience and inventions in the field of multidisciplinary electro-optical and display technologies. Before founding RealView Imaging, Mr. Gelman worked for Elbit Systems (NASDAQ: ESLT), one of Israel's largest defense companies, leading large-scale R&D programs of hi-tech helmet-mounted display systems for aviation/pilot applications, actively used by leading Air Forces around the world.
Over the years, he has gained substantial system engineering and leadership skills, working on cutting-edge projects and leading multiple complex programs and operations. Mr. Gelman earned his Executive MBA (cum laude) from Haifa University, a B.Sc. (cum laude) in Industrial Engineering (IT) & Management from the Technion, Israel Institute of Technology, and is a graduate of the Merage Executive Program in Irvine, California.
Abstract
GPU Compute enables the acceleration of data-parallel computation whilst helping reduce CPU load and energy consumption, increasing overall throughput, and enhancing system flexibility and extensibility. In 2012 the ARM Mali-T600™ series of GPUs was the first to bring GPU Compute to mobile platforms through support for OpenCL™ Full Profile and RenderScript. Since then, developers have been quick to demonstrate the benefits of this technology: from low-latency gesture control and more responsive subject recognition to more efficient multimedia processing, the potential of accelerated depth sensing, and much more. With the plethora of cameras and sensors available today, Computer Vision is a natural and compelling use case for this technology.
In this presentation we will discuss some of the work ARM and our ecosystem partners are doing in this area to harness Mali’s power. We look at some key use case examples and explore the technological challenges and opportunities facing developers.
Bio
Based at ARM's Cambridge HQ, Tim is a graphics and GPGPU engineer working within the Media Processing Group. He specialises in all things compute, with a particular focus on Computer Vision within the mobile and embedded space. His role encompasses working with developers new to the Mali GPU, helping spread the word about optimal heterogeneous software design on this exciting platform. Previously Tim worked as a producer for the BBC, leading a research and development team in the use of multimedia in television training. Tim is married with two children and lives in the Chiltern Hills just outside London.
Abstract
Understanding human actions in videos has been a central research theme in computer vision for decades, and much progress has been achieved over the years. Much of this progress was demonstrated on standard benchmarks used to evaluate novel techniques. These benchmarks and their evolution provide a unique perspective on the growing capabilities of computerized action recognition systems. By examining them, we learn just how far machine vision systems have come in understanding human actions in videos, as well as how much remains to be done to close the gap between existing state-of-the-art performance and the needs of real-world applications. In this talk I will survey these benchmarks -- from early examples, such as the Weizmann set, to contemporary benchmarks -- narrating the story of the development of machine action recognition in videos, from its early days to the challenges that still remain.
Bio
Tal Hassner received the M.Sc. and Ph.D. degrees in applied mathematics and computer science from the Weizmann Institute of Science in 2002 and 2006, respectively. He later completed a postdoctoral fellowship, also at the Weizmann Institute. In 2006, he joined the faculty of the Department of Mathematics and Computer Science, The Open University of Israel, where he currently holds a Senior Lecturer position (Assistant Professor). In addition, working with industrial partners as a consultant, he participates in the development of commercial computer vision and machine learning technologies. His research interests are in applications of machine learning in pattern recognition and computer vision.
Bio
Sam Hasinoff is a software engineer at Google. Before joining Google in August 2011, he was a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), a philanthropically endowed academic institute on the campus of the University of Chicago. From 2008 to 2010, he was a postdoctoral fellow at the Massachusetts Institute of Technology, supported in part by the Natural Sciences and Engineering Research Council of Canada. He received the B.Sc. degree in computer science from the University of British Columbia in 2000, and the M.Sc. and Ph.D. degrees in computer science from the University of Toronto in 2002 and 2008, respectively. In 2006, he received an honorable mention for the Longuet-Higgins Best Paper Award at the European Conference on Computer Vision. He is the recipient of the Alain Fournier Award for the top Canadian dissertation in computer graphics in 2008.
Abstract
Super-resolution (SR) algorithms typically assume that the blur kernel is known (either the point spread function (PSF) of the camera, or some default low-pass filter such as a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for "blind" super-resolution. In particular, we show that:
- Contrary to common belief, the PSF of the camera is the WRONG blur kernel to use in SR algorithms.
- The correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent fractal-like recurrence property of small natural image patches. In particular, we show that the recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR) can also be used for estimating the optimal blur kernel (see the sketch below).
This leads to significant improvement in SR results.
* Joint work with Tomer Michaeli.
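A crude sketch of the cross-scale estimation idea, restricted for simplicity to a parametric (Gaussian) family of candidate kernels; the actual method is nonparametric and considerably more sophisticated:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def patches(img, p, stride):
    # Collect p x p patches as flat vectors (brute force, sketch only).
    return np.array([img[y:y + p, x:x + p].ravel()
                     for y in range(0, img.shape[0] - p, stride)
                     for x in range(0, img.shape[1] - p, stride)])

def kernel_score(img, sigma, s=2, p=7):
    """Score a candidate SR blur kernel, here a Gaussian of width sigma.

    Blur and downscale the low-res image by s using the candidate
    kernel; if the kernel is close to the true one, the downscaled
    patches recur in the original image, so their nearest-neighbor
    error is small. Exhaustive matching below is for illustration;
    use an approximate nearest-neighbor index for real images.
    """
    small = zoom(gaussian_filter(img, sigma), 1.0 / s, order=1)
    bank = patches(img, p, stride=2)     # patch database at the original scale
    query = patches(small, p, stride=4)
    err = sum(((bank - q) ** 2).mean(axis=1).min() for q in query)
    return err / len(query)              # lower = better kernel candidate

# Pick the best kernel from a small parametric family:
# sigma_hat = min([0.5, 1.0, 1.5, 2.0], key=lambda s_: kernel_score(lowres, s_))
```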
Bio
Michal Irani is a Professor at the Weizmann Institute of Science, in the Department of Computer Science and Applied Mathematics. She received her PhD in Computer Science from the Hebrew University. During 1993-1996 she was a member of the Vision Technologies Laboratory at the Sarnoff Research Center (Princeton). She joined the Weizmann Institute in 1997. Michal's research interests center around computer vision, image processing, and video information analysis. Michal's prizes and honors include the David Sarnoff Research Center Technical Achievement Award (1994), the Yigal Alon three-year Fellowship for Outstanding Young Scientists (1998), and the Morris L. Levinson Prize in Mathematics (2003). She received the ECCV Best-Paper Award in 2000 and in 2002, and was awarded the Honorable Mention for the Marr Prize in 2001 and in 2005.
Abstract
Classification of defect images without human intervention has been challenging for years. The objective of Automatic Defect Classification (ADC) techniques is not only to detect the existence of defects in a SEM image, but to classify them automatically by type, in order to provide more detailed feedback on the production process. The classification challenges originate from the small amount of training data and its high dimensionality. In addition, the continuously changing environment in the fab, comprising new and changing defect types encountered during production phases, requires constant human monitoring. We introduce a new approach which maintains high-purity automatic classification. The results show that our classification model attains high classification performance, coupled with lower complexity and good generalization power.
Bio
Idan Kaizerman is the manager of the machine vision algorithms development group in Applied Materials' Process Diagnostics and Control business unit. During his six years at Applied Materials, Idan, who holds an M.Sc. in Electrical Engineering from Ben-Gurion University, has filled a number of key roles in the development of various detection and classification algorithms for wafer inspection and defect review tools. Idan now manages the group responsible for the development of computer vision algorithms combining state-of-the-art procedures for image processing and machine learning.
Abstract
We present several research topics in medical imaging, and the related computer vision algorithms, developed at IBM Research Haifa. We will describe feature extraction methods specific to medical tasks, and will present a newly developed general method for figure-ground segmentation with applications to natural and medical images. The method combines a bottom-up approach of generating multiple candidate segmentations with a top-down model for re-ranking these segmentations. It yields competitive results on a number of challenging datasets. We present examples of applying the proposed segmentation method to natural images and to difficult medical image cases.
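The bottom-up/top-down combination follows a generate-and-rerank pattern; a minimal sketch, in which the candidate proposer, the ranking model and the toy features are all hypothetical stand-ins for the talk's specific components:

```python
import numpy as np

def best_segmentation(image, candidates, ranker):
    """Generate-and-rerank: score bottom-up candidates with a top-down model.

    `candidates` is a list of binary masks from any bottom-up proposer,
    and `ranker` is a trained model mapping a feature vector to a score;
    both are hypothetical stand-ins for the components in the talk.
    """
    def features(img, mask):
        # Toy features: foreground fraction, mean and spread of intensity.
        fg = img[mask]
        return np.array([mask.mean(), fg.mean(), fg.std()])

    scores = [ranker(features(image, m)) for m in candidates]
    return candidates[int(np.argmax(scores))]
```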
Bio
Dr. Pavel Kisilev is a lead research scientist in the Multimedia Department at IBM Research Haifa, which he joined in 2011. He graduated from the Technion in 2002 with a Ph.D. in Electrical Engineering. From 2003 to 2011 he was a Senior Research Scientist at Hewlett-Packard Laboratories, Israel. Pavel's research interests include computer vision, statistical learning, image and video understanding and medical imaging. Dr. Kisilev is the author of over 40 patents, a book chapter, and over 30 papers in top-tier journals and conferences.
Abstract
The main challenge for Xbox One Hand Pose Recognition was to meet both accuracy and classification-time targets simultaneously. Our work is based on learning multiple lookup tables and on a novel approach of trading training-set size for classification speed: increasing the former enables reduction of the latter. The resulting classifier is an order of magnitude faster, and makes about four times fewer errors, than the previous best classifier.
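One common way to realize classification by lookup tables is a fern-style construction, in which a few binary pixel comparisons form a bit-string that indexes a table of learned class scores. The sketch below illustrates that general pattern, not necessarily the exact construction used in this work:

```python
import numpy as np

class FernTable:
    """One lookup table: k binary pixel tests -> 2**k score vectors.

    Illustrative fern-style sketch of classification by lookup tables.
    Training fills each bin with class scores; classification is k
    comparisons plus one table read, which is why it is so fast.
    """
    def __init__(self, pairs, scores):
        self.pairs = pairs      # (k, 2) indices of pixel pairs to compare
        self.scores = scores    # (2**k, n_classes) learned score table

    def index(self, patch):
        # Binary tests on a flattened patch, packed into a table index.
        bits = patch[self.pairs[:, 0]] > patch[self.pairs[:, 1]]
        return int(np.dot(bits, 1 << np.arange(len(bits))))

    def __call__(self, patch):
        return self.scores[self.index(patch)]

def classify(patch, tables):
    # Ensemble of tables: sum per-table scores, pick the best class.
    total = sum(t(patch) for t in tables)
    return int(np.argmax(total))
```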
Bio
Eyal Krupka leads computer-vision and machine-learning research at Microsoft Research, Advanced Technology Labs Israel. His recent work has focused on natural user interfaces and depth cameras, delivered to Xbox One. Before joining Microsoft, Krupka worked for Intel Research and was a staff researcher for DSPC. He has 22 years of experience in leading R&D projects in a wide range of fields, including machine learning, computer vision, signal and image processing, digital communications, software and hardware. He has more than 25 patents and has been published in top machine-learning conferences and journals. Krupka received his Ph.D. in computational neuroscience from the Hebrew University, and his B.Sc. (summa cum laude) in electrical engineering from the Technion.
Abstract
We see more and more wearable devices (glasses) being presented to the market, and new features and applications are made available to users, but it seems that the interaction method with these devices has not been mastered yet. Touch panels, voice controls and remote controllers connected to these devices are currently used to control the content, but what about a simple point of a finger or a swipe of the hand to interact with the information presented in front of you on the glass display?
Join us at this session to learn about gesture recognition technology as a natural user interaction method for wearable devices, and hear about the challenges in bringing robust gesture solutions to these devices and how to overcome them.
Bio
Tal joined eyeSight with product management experience gained during his five-year tenure with Microsoft. As eyeSight's VP Product Management, Tal is dedicated to developing eyeSight's roadmap and the definition and messaging of its products.
Before joining eyeSight, Tal served as Product Planner and Product Manager for key Microsoft Office applications (including Word & PowerPoint), where he gained deep knowledge and understanding of global-scale software products.
Tal holds an LL.B. from Tel-Aviv University and an MBA from the Kellogg School of Management at Northwestern University.
Abstract
At CES 2014, under the RealSense brand, Intel announced the upcoming launch of a 3D camera, the first of its kind to be integrated into laptops and tablets. Intel's announcement is the latest in a succession of product launches over the past several years, including, notably, Microsoft's Kinect camera, that have brought 3D camera technology into the mainstream. Arguably, it may also be the most significant in terms of making this technology widely accessible and even ubiquitous. Reflecting on the impact of 3D cameras on academic research as well as industry applications, what can we expect when low-cost, low-power 3D cameras are integrated into every computing device?
Bio
Gershom Kutliroff is the Chief Technologist of Intel's Perceptual Computing Software group. In this role, he is the focal point for the group's computer vision technologies, working with the internal development groups as well as interfacing with external parties. Previously, Dr. Kutliroff was the CTO and co-founder of Omek Interactive, acquired by Intel in 2013. Before founding Omek, Dr. Kutliroff was the Chief Scientist of IDT Video Technologies, where he led research efforts in developing computer vision applications to support IDT's videophone operations. He earned his Ph.D. and M.Sc. in Applied Mathematics from Brown University, and his B.Sc. in Applied Mathematics from Columbia University.
Abstract
Array optics and computational cameras pave an alternative path to common single-aperture cameras. This presentation reviews multi-aperture imaging systems in which multiple low-resolution images are combined into a single high-resolution image. It analyzes the resolution trade-offs and limits of sub-pixel registration and super-resolution processes, and proposes different paths for multi-aperture imaging systems. It is worthwhile to mention that there are other approaches to super-resolution and image upscaling, e.g., transferring information between different image scales, using an example database, or exploiting the recurrence of patches within the same or a different image scale. These methods require specific assumptions on the image content, which do not always hold in practice. A particular example of a dual-aperture camera with up to 3x zoom capability will be discussed in detail.
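At the heart of combining multiple low-resolution images lies the classic shift-and-add step; a minimal sketch, assuming the sub-pixel shifts are already known from a registration stage (a real pipeline would continue with deblurring and regularization):

```python
import numpy as np

def shift_and_add(lows, shifts, s):
    """Classic shift-and-add super-resolution (illustrative sketch).

    lows:   list of (h, w) low-resolution images
    shifts: list of (dy, dx) sub-pixel shifts in low-res pixels,
            assumed known from a registration step
    s:      integer upscaling factor
    Each low-res pixel is accumulated onto the nearest high-res grid
    cell; edge handling here is deliberately simplistic.
    """
    h, w = lows[0].shape
    acc = np.zeros((h * s, w * s))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(lows, shifts):
        ys = np.clip(np.round((np.arange(h) + dy) * s).astype(int), 0, h * s - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * s).astype(int), 0, w * s - 1)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)
```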
Bio
Prof. David Mendlovic received his B.Sc. and Ph.D. degrees in Electrical Engineering from Tel Aviv University, Israel. He joined Tel-Aviv University as a Lecturer in Electrical Engineering, where at present he is a Full Professor of electro-optics. He has authored more than 200 technical articles and 3 book chapters, and is the holder of more than 30 patents, all of which have been commercialized.
He is a founder of successful opto-electronics startup companies (e.g. Civcom and Eyesquad) and served as their CEO. Civcom Inc. was acquired by Padtec S/A of Brazil, and Eyesquad was acquired by Tessera Inc. (NASDAQ: TSRA). He now leads Corephotonics, a startup that deals with the creation of advanced compact camera technologies.
Prof. Mendlovic is the 1998 winner of the ICO (International Commission for Optics) Award, in recognition of his contribution to the optical signal processing area.
From January 2008 until September 2010, Prof. Mendlovic was the Chief Scientist of the Israeli Ministry of Science. At present he serves as co-Chairman of the German-Israeli Foundation (GIF) and Vice Dean for Research and Industrial Relations.
Abstract
Automated optical inspection (AOI) of printed circuit boards (PCBs) has been performed since the early 80s. The AOI process detects production defects such as cuts and shorts. It is mainly based on optical differentiation between a polymer substrate and a metallic conductor, which involves optical properties such as reflection, scattering and fluorescence.
Several approaches have been exploited for AOI, including laser scanning and array image sensors. The predominant approach, using line-scan sensors, requires unique illumination conditions (such as angular coverage) for PCB inspection. Fulfilling these requirements was enabled by creative illumination designs based on lamps, fiber-optics, and the more recent introduction of LEDs.
We shall review the evolution of AOI systems from an optical perspective, including examples of industrial systems as well as the effects of technological and market trends over the last decades.
Bio
Ram Oron received his Ph.D. from the Weizmann Institute of Science in the field of laser resonators. He was later the founder and CTO of KiloLambda Technologies, where he led the development of optical components and subsystems. For the last 6 years he has been with Orbotech, where he leads physics and system engineering activities in the development of PCB inspection, repair, and direct imaging machines. Dr. Oron has authored more than 20 refereed papers, is an inventor of more than 10 US patents, and is a senior member of SPIE (the international society for optics and photonics).
Abstract
What will be the next disruption? A review of VC investment trends and a glance at a few of the upcoming and emerging startups in the computer vision and video recognition industry.
Bio
Amir Pinchas, Principal, Microsoft Ventures Fund - responsible for the fund's investments in EMEA.
He drives sourcing, applicant reviews, due diligence and investments, and is actively engaged with Microsoft Ventures portfolio companies in an advisory capacity on core product marketing, business, go-to-market and financing strategies.
Prior to this role Amir served in finance and strategy functions in companies such as Microsoft, Intel and Numonyx. He has vast experience in working with startups and was the co-founder of the first Microsoft accelerator. He has a track record in new business evaluation, strategic consulting, business development, and mergers and acquisitions.
Abstract
Rolling Shutter (RS) is a camera exposure mode that is common in CMOS sensors due to the special requirements of small pixel size and longer exposure times (practically every smartphone sensor uses RS). In the RS mechanism, the exposure of each row of pixels is delayed relative to the previous row by roughly 8-30 μsec. This delay causes capture artifacts when the object or camera is in motion. For a camera that suffers from strong mechanical vibrations, the captured video exhibits a wobble phenomenon, also termed the "jello effect", which is extremely disturbing to the viewer's eye.
Image-processing correction of RS distortion is a challenging task due to the short spatial/temporal correlations between neighboring pixels/frames, especially in strong-dynamics scenarios. In this work we propose a stabilization procedure that models rolling shutter via a dense set of projective transformations across frame rows. The estimated distortion is used to re-render the video as though all the pixels in a frame were captured at the same time. As a post-processing stage, temporal information is used to further stabilize the frame sequence.
The suggested algorithm is highly efficient, robust to foreground motion, and produces a level of stabilization that significantly outperforms state-of-the-art RS stabilization methods.
Bio
Nadav Raichman holds a Ph.D. in physics and an M.Sc. in electrical engineering from Tel-Aviv University. He joined IAI in 2009 and works in the TAMAM division on the development of electro-optical stabilized payloads. Nadav is the head of TAMAM's video processing team and of its video tracker development team.
Abstract
The Medical Image Computing literature traditionally favors fully-automated analysis algorithms that offer the potential for high-throughput, objective, and reproducible results on large data collections. However, fully-automated techniques cannot handle many time-critical tasks, or tasks that require contextual knowledge not readily available in the images alone. Thus, the oversight of an experienced physician remains mandatory.
We present a coherent, active-contour segmentation method which supports intuitive and friendly user interaction, subject to the 'bottom-up' constraints introduced by the image features.
A few mouse-clicks of the user, located in regions of 'disagreement', are represented as a continuous energy term that is incorporated into a level-set functional.
In our experiments we show that with minimal user input, the obtained segmentation results become, in practice, identical to manual expert segmentation.
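A simplified sketch of how sparse clicks can enter a level-set functional as a continuous term; the Gaussian weighting and the schematic update below are illustrative assumptions, not the exact formulation of the talk:

```python
import numpy as np

def user_click_force(shape, clicks, sigma=10.0):
    """Continuous energy term derived from sparse user clicks (sketch).

    clicks: list of (y, x, label), with label = +1 for 'object here'
    and -1 for 'background here'. Each click exerts a Gaussian-weighted
    pull on the level-set function toward its label.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    force = np.zeros(shape)
    for y, x, label in clicks:
        force += label * np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                                / (2.0 * sigma ** 2))
    return force

# Schematic use inside a level-set evolution loop:
# phi += dt * (image_driven_force(phi, img)                  # bottom-up term
#              + lam * user_click_force(img.shape, clicks))  # user term
```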
Bio
Dr. Tammy Riklin Raviv has been a faculty member at the Electrical and Computer Engineering Department of Ben-Gurion University since November 2013.
Her research focuses on the development of mathematical and algorithmic tools for the processing, analysis and understanding of natural, biological and medical images. She holds a B.Sc. in Physics and an M.Sc. in Computer Science from the Hebrew University of Jerusalem, Israel. She received her Ph.D. from the School of Electrical Engineering of Tel-Aviv University. In the years 2008-2013 she was a post-doctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) and a research fellow at Harvard Medical School and at the Broad Institute of MIT and Harvard.
Abstract
Today's mobile application use cases show an increasing trend of moving from simple image processing towards computer-vision-oriented algorithms. The major difference between image processing and CV-based algorithms is the need to "understand the image" and perform complicated algorithms that are based on knowing what the objects in the scene are. Real-time embedded feature extraction and tracking is a key element in enabling CV-based applications in embedded solutions such as mobile, automotive and surveillance. Performing real-time feature extraction while meeting a strict power-consumption budget is challenging on existing embedded computing platforms. In this session we will analyze the trade-offs of various feature extraction algorithms and their suitability for embedded vision processing on a programmable processor, and recommend methods for efficient implementation.
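As an example of the trade-offs involved, comparison-only detectors map well onto embedded vector processors; below is a deliberately reduced FAST-style corner test (an 8-pixel ring instead of FAST-9's 16-pixel ring, and no non-maximum suppression) showing why such tests are cheap:

```python
import numpy as np

# A pixel is a corner candidate if enough pixels on a surrounding ring
# are all brighter (or all darker) than the center by a threshold t.
# Comparison-only arithmetic and early exits are what make detectors
# of this family attractive on embedded DSPs. Illustrative, reduced
# version; caller must keep (y, x) at least 3 pixels from the border.
RING = [(-3, 0), (-2, 2), (0, 3), (2, 2), (3, 0), (2, -2), (0, -3), (-2, -2)]

def is_corner(img, y, x, t=20, need=6):
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dy, dx in RING]
    brighter = sum(p > c + t for p in ring)
    darker = sum(p < c - t for p in ring)
    return brighter >= need or darker >= need
```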
Bio
Moshe Shahar serves as Director of System Architecture at CEVA and manages the computer vision processing platform project there. He has 15 years of experience in the semiconductor and silicon industry. Prior to this position, Mr. Shahar worked as the System Department Manager and system architect at CEVA. Previously, he worked at DSP Group and at National Semiconductor as a system design engineer and system architect. Mr. Shahar holds a B.Sc. in Electrical Engineering from Tel Aviv University in Israel.
Bio
Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye innovation award in 2004, and the Landau award in exact sciences in 2005. He is the co-founder of Mobileye, an Israeli company developing systems-on-chip and computer vision algorithms for detecting pedestrians, vehicles, and traffic signs for driver assistance systems. He is the co-founder of OrCam, an Israeli company that recently launched an assistive product for the visually impaired based on advanced computerized visual interpretation capabilities.
Bio
An entrepreneur (face.com) who turned into an early-stage investor (Any.DO, Joytunes, and Commerce Sciences, to name a few).
An equal partner, together with Michael Eisenberg, of Aleph, a new $140 million venture capital fund focused on serving Israeli entrepreneurs who want to build big, scalable global businesses.
Bio
Bruce Tannenbaum works on image processing and computer vision applications in technical marketing at MathWorks. Earlier in his career, he developed computer vision and wavelet-based image compression algorithms at Sarnoff Research Center (now SRI). He holds an MSEE degree from the University of Michigan and a BSEE degree from Penn State.
Abstract
Mobile phones have been visually-oriented devices since the appearance of the first camera phones, and today user-produced content is a driving force behind everything from network utilization to app creation. In this talk, Imagination will explain how compute APIs such as OpenCL are enabling developers to perform sophisticated manipulation of image data to create a wide range of new user experiences, from computational photography through intelligent vision systems to augmented reality apps and many others.
Bio
Doug joined Imagination Technologies in June 2012 and has been instrumental in driving the company's efforts behind the adoption and implementation of GPU computing products based on OpenCL and Renderscript. Prior to joining Imagination, Doug was with XMOS from its formative stage, where he recruited and led the software tools team, co-authored the XC programming language, and developed the company's tools product roadmap. Doug holds Bachelor's and Doctorate degrees from the University of Bristol, where his research focused on the design of tools that simplify the programming of heterogeneous systems.
Abstract
An emerging approach to utilizing big data in computer vision is the set of techniques called Deep Learning. In these techniques, multilayer neural networks are often trained using millions of training samples. The top layers of the hierarchy are used as powerful high-level representations of the input signal. In my talk, I will discuss recent advances in this field.
(This is a temporary abstract due to anonymity considerations. A more detailed abstract is expected a month before the conference).
Bio
Prof. Lior Wolf is a faculty member at the School of Computer Science at Tel-Aviv University. Previously, he was a post-doctoral associate in Prof. Poggio's lab at MIT. He graduated from the Hebrew University, Jerusalem, where he worked under the supervision of Prof. Shashua. Lior Wolf was awarded the 2008 Sackler Career Development Chair, the Colton Excellence Fellowship for new faculty (2006-2008), the Max Shlumiuk Award for 2004, and the Rothchild Fellowship for 2004. His joint work with Prof. Shashua in ECCV 2000 received the best paper award, and their work in ICCV 2001 received the Marr Prize honorable mention. He was also awarded the best paper award at the post-ICCV 2009 workshop on eHeritage, and at the pre-CVPR 2013 workshop on action recognition. Prof. Wolf's research focuses on computer vision and applications of machine learning, and includes topics such as face identification, digital paleography, and video action recognition.