29 October 2020
Vice President / Distinguished Scientist, Amazon
Professor Gérard Medioni received the Diplôme d’Ingénieur from ENST, Paris in 1977, and an M.S. and Ph.D. from the University of Southern California in 1980 and 1983, respectively.
He is Vice President/Distinguished Scientist at Amazon, where he is leading the research efforts for Amazon Go, and the recently announced Amazon One service. He is also Professor Emeritus of Computer Science at USC, where he served as Chairman of the Computer Science Department from 2001 to 2007. Professor Medioni has made significant contributions to the field of computer vision. He has published 4 books, over 80 journal papers and 200 conference articles, and is the recipient of more than 45 patents.
He is the editor, with Sven Dickinson, of the Computer Vision series of books for Morgan-Claypool.
Prof. Medioni is on the advisory board of the IEEE Transactions on PAMI Journal and an associate editor of the Pattern Recognition and Image Analysis Journal.
He is vice president of the Computer Vision Foundation (CVF).
Prof. Medioni has served as general co-chair of many conferences, including CVPR (1997, 2001, 2007, 2009, 2020), ICPR (1998, 2014), WACV (2009, 2011, 2013, 2015, 2017, 2019, 2021), and ICCV (2017, 2019).
Prof. Medioni has been a consultant to several companies and startups (DXO, Poseidon, Opti-copy, Geometrix, Symah Vision, KLA-Tencor, PrimeSense) prior to joining Amazon.
He is a Fellow of IAPR, a Fellow of the IEEE, and a Fellow of AAAI.
Gitta Kutyniok currently holds a Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München and an Adjunct Professorship in Machine Learning at the University of Tromsø. She received her Diploma in Mathematics and
Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute
of Technology, and Washington University in St. Louis, and was a Nachdiplom Lecturer at ETH Zurich in 2014. In 2008, she became a full professor of mathematics at the Universität Osnabrück and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020.
She received various awards for her research such as an award from the Universität Paderborn in 2003, the Research Prize of Gießen and a Heisenberg-Fellowship in 2006, the von Kaven Prize by the DFG in 2007, and an Einstein Chair in 2008. She gave the Noether Lecture at the ÖMG-DMV Congress in 2013 and the Hans Schneider ILAS Lecture at IWOTA in 2016, and will hold a plenary talk in 2021 at the European Congress of Mathematics. She also became a member of the Berlin-Brandenburg Academy of Sciences and Humanities in 2017, a SIAM Fellow in 2019, and an IEEE Senior Member in the same year. She was Chair of the SIAM Activity Group on Imaging Sciences from
2018-2019 and is Co-Chair of the first SIAM conference on Mathematics of Data Science taking place this year. She was Scientific Director of the graduate school BIMoS at TU Berlin from 2014 to 2020 and is currently Chair of the GAMM Activity Groups on Mathematical Signal- and Image Processing and Computational and Mathematical
Methods in Data Science. Her main research interests are in the areas of applied harmonic analysis, compressed sensing, high-dimensional data analysis, imaging science, inverse problems, machine learning, numerical mathematics, partial differential
equations, and applications to life sciences and telecommunication.
Deep Learning meets Modeling: Taking the Best out of Both Worlds
Pure model-based approaches are today often insufficient for solving complex inverse problems in medical imaging. At the same time, we witness the tremendous success of data-based methodologies, in particular, deep neural networks for such problems. However, pure deep learning approaches often neglect known and valuable information from the modeling world, are prone to instabilities and are not interpretable.
In this talk, we will develop a conceptual approach by combining the model-based method of sparse regularization by shearlets with the data-driven method of deep learning. Our solvers are guided by a microlocal analysis viewpoint to pay particular attention to the singularity structures of the data. Finally, focusing on the inverse problem of (limited-angle) computed tomography, we will show that our algorithms significantly outperform previous methodologies, including methods entirely based on deep learning.
Tal Arbel is a Professor in the Department of Electrical and Computer Engineering at McGill University, where she is the Director of the Probabilistic Vision Group and Medical Imaging Lab in the Centre for Intelligent Machines. She is a Canada CIFAR AI Chair at Mila (Montreal Institute for Learning Algorithms) and an Associate Member of the Goodman Cancer Research Centre. Prof. Arbel’s research focuses on the development of probabilistic machine learning methods in computer vision and medical image analysis, with a wide range of real-world applications in neurology and neurosurgery. For example, the machine learning algorithms developed by her team for Multiple Sclerosis (MS) lesion detection and segmentation have been used in the clinical trial analysis of most of the new MS drugs currently used worldwide. She is a recipient of the 2019 McGill Engineering Christophe Pierre Research Award. She regularly serves on the organizing teams of major international conferences in both fields (e.g. MICCAI, MIDL, ICCV, CVPR). She was an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and Computer Vision and Image Understanding (CVIU), and is now the Editor-in-Chief of a newly launched arXiv overlay journal: Machine Learning for Biomedical Imaging (MELBA).
Modelling and Propagating Uncertainties in Machine Learning for Medical Images of Patients with Neurological Diseases
Although deep learning (DL) models have been shown to outperform other frameworks in a variety of medical contexts, inference in the presence of pathology in medical images presents challenges to popular networks. Errors in deterministic outputs lead to distrust by clinicians and hinder the adoption of DL methods in the clinic. Moreover, given that medical image analysis typically requires a sequence of inference tasks to be performed, errors accumulate over the sequence of outputs. This talk will describe recent work exploring (MC-dropout) measures of uncertainty in tumour detection and segmentation models in patient images, and illustrate how propagating uncertainties across cascaded medical imaging tasks (e.g. MR image synthesis) can improve DL inference. The models have been successfully applied to the MICCAI BraTS brain tumour segmentation challenge dataset, and they were also used to devise metrics for ranking competing methods in the new BraTS sub-challenge on Quantifying Uncertainties in Brain Tumour Segmentation.
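The MC-dropout uncertainty measure mentioned in the abstract can be illustrated with a toy NumPy sketch (this is not the actual segmentation models from the talk; the network, weights, and sizes below are made up for illustration): dropout is kept active at inference, the network is run many times, and the spread of the stochastic outputs serves as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed weights (a stand-in for a trained model).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.5, mc_dropout=True):
    """One stochastic forward pass; dropout stays ON at inference for MC-dropout."""
    h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
    if mc_dropout:                            # keep sampling dropout masks at test time
        mask = rng.random(h.shape) > drop_p
        h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(4, 8))                   # 4 hypothetical input feature vectors
T = 200                                       # number of stochastic passes
samples = np.stack([forward(x) for _ in range(T)])  # shape (T, 4, 1)

mean_pred = samples.mean(axis=0)              # MC estimate of the prediction
uncertainty = samples.std(axis=0)             # per-output predictive uncertainty
```

In a cascaded pipeline, `samples` (rather than just `mean_pred`) would be passed to the next task, which is the essence of propagating uncertainty across inference stages.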
Michael Bronstein is a professor at Imperial College London, where he holds the Chair in Machine Learning and Pattern Recognition, and is Head of Graph Learning Research at Twitter. He also heads ML research in Project CETI, a TED Audacious Prize-winning collaboration aimed at understanding the communication of sperm whales. Michael received his PhD from the Technion in 2007. He has held visiting appointments at Stanford, MIT, Harvard, and Tel Aviv University, and has also been affiliated with three Institutes for Advanced Study (at TU Munich as a Rudolf Diesel Fellow (2017-2019), at Harvard as a Radcliffe Fellow (2017-2018), and at Princeton as a visitor (2020)). Michael is the recipient of five ERC grants, a Member of the Academia Europaea, a Fellow of IEEE, IAPR, and ELLIS, an ACM Distinguished Speaker, and a World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019). He has previously served as Principal Engineer at Intel Perceptual Computing and was one of the key developers of the Intel RealSense technology.
Geometric Deep Learning: Past, Present, And Future
Geometric deep learning has recently become one of the hottest topics in machine learning, with its particular instance, graph neural networks, being used in a broad spectrum of applications ranging from 3D computer vision and graphics to high energy physics and drug design. Despite the promise and a series of success stories of geometric deep learning methods, we have not witnessed so far anything close to the smashing success convolutional networks have had in computer vision. In this talk, I will outline my views on the possible reasons and how the field could progress in the next few years.
Asst. Professor of Computer Science, Tel Aviv University; Chief Scientist, Imubit
Nadav Cohen is an Asst. Professor of Computer Science at Tel Aviv University, and Chief Scientist at Imubit. His academic research revolves around the theoretical and algorithmic foundations of deep learning, while at Imubit he leads the development of deep learning systems controlling industrial manufacturing lines. Nadav earned a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion Excellence Program for Distinguished Undergraduates. He obtained his PhD (direct track, summa cum laude) at the Hebrew University, and was subsequently a postdoctoral scholar at the Institute for Advanced Study in Princeton. For his contributions to deep learning, Nadav won a number of awards, including the Google Doctoral Fellowship in Machine Learning, the Final Prize for Machine Learning Research, the Rothschild Postdoctoral Fellowship, the Zuckerman Postdoctoral Fellowship, and TheMarker's 40 under 40 list.
Practical Implications of Theoretical Deep Learning
Deep learning has been experiencing unprecedented success in recent years, delivering state-of-the-art performance in numerous application domains. However, despite its extreme popularity and the vast attention it is receiving, this technology suffers from various limitations --- in terms of stability, reliability, explainability and more --- hindering its proliferation. In this talk, I will argue that theoretical analyses of deep learning may assist in addressing such limitations by providing principled tools for neural architecture and optimization algorithm design. Two examples will be given: (i) application of tensor analysis and quantum mechanics to configuring the architecture of a convolutional neural network; and (ii) dynamical analysis of gradient descent over linear neural networks for enhancing convergence and generalization properties.
Applied Research Lead, Facebook AI
Tal Hassner received his M.Sc. and Ph.D. degrees in applied mathematics and computer science from the Weizmann Institute of Science in 2002 and 2006, respectively. In 2008 he joined the Department of Mathematics and Computer Science at the Open University of Israel, where he was an Associate Professor until 2018. From 2015 to 2018, Tal was a senior computer scientist at the Information Sciences Institute (ISI) and a Visiting Research Associate Professor at the Institute for Robotics and Intelligent Systems, Viterbi School of Engineering, both at USC, CA, USA. From 2018 to 2019, he was a Principal Applied Scientist at AWS, where he led the design and development of the latest AWS face recognition pipelines. Since 2019 he has been an Applied Research Lead at Facebook AI, supporting both the text and people photo understanding teams.
On Detecting Manipulated Faces
Senior Research Scientist, Google
Learning to Retime People in Video
In most videos we capture, humans constitute the salient objects in the scene. Thus, developing computational methods for analyzing, visualizing and manipulating people in videos plays an important role in many applications (e.g., Augmented Reality, Robotics). In this talk, I'll highlight a few of my recent works, each touching on a different aspect of people's geometry and motion in ordinary videos. More specifically: (i) a method that transforms an ordinary video of a moving person into a 3D sculpture representing their shape and motion; (ii) a deep-learning based model that predicts the geometry (depth) of people in videos in the challenging case where both the camera and the people in the scene are freely moving; and (iii) a neural-rendering based model for retiming -- speeding up, slowing down, or entirely freezing -- certain people in videos, while automatically and properly re-rendering all the scene elements that are related to those people, such as shadows, reflections, and loose clothing. All these methods take just an ordinary video as input and learn concepts of natural motion, geometry and scene decomposition without requiring any manual labels. I'll demonstrate new video effects on various real-world complex videos such as dancing, groups running, or kids jumping on trampolines.
Professor, The Blavatnik School of Computer Science, Tel Aviv University
He holds a BSc in computer science and physics, and a PhD in computational neuroscience. After his PhD, he was a postdoctoral fellow at the University of Toronto and then at MIT. His research interests include machine learning, deep learning, graphical models, optimization, machine vision, and natural language processing. His work has received several prizes, including five paper awards at NeurIPS, ICML and UAI. In 2019, he received an ERC Consolidator Grant.
Generating Scene Graphs from Images and Images from Scene Graphs
Scene graphs are detailed semantic descriptions of images. In this talk I will describe methods for annotating images with scene graphs, learning how to annotate from weak supervision, and generating images from scene graphs. In particular, I will discuss questions of representation invariance in these architectures.
Professor, Weizmann Institute of Science
Michal Irani is a Professor at the Weizmann Institute of Science, in the Department of CS and Applied Mathematics. She received her PhD from the Hebrew University (1994), and joined the Weizmann Institute in 1997. Her research interests center around Computer-Vision, Image-Processing, AI and Video information analysis. Michal's recent prizes and honors include the Maria Petrou Prize (2016), the Helmholtz “Test of Time Award” (2017), the Landau Prize for Arts & Sciences (2019), and the Rothschild Prize (2020). She also received the ECCV Best Paper Award in 2000 and in 2002, and was awarded the Honorable Mention for the Marr Prize in 2001 and in 2005.
“Deep Internal learning” -- Deep Learning with Zero Examples
I will show how complex visual inference can be performed with Deep-Learning, in a totally unsupervised way, by training on a single image -- the test image itself. The strong recurrence of information inside a single image provides powerful internal examples, which suffice for self-supervision of CNNs, without any prior examples or training data. This gives rise to true “Zero-Shot Learning”. I will show the power of this approach on a variety of problems, including super-resolution, segmentation, transparency separation, dehazing, image-retargeting, and more.
I will further show how self-supervision can be used for “Mind-Reading” (reconstructing images from fMRI brain recordings), despite having very little training data.
Associate Director, Alibaba DAMO Israel Lab
Matan leads the eXtended Reality (XR) efforts in the Alibaba DAMO Israel Lab. He was previously the CTO and co-founder of Infinity Augmented Reality, which developed AR glasses and was acquired by Alibaba in 2019. He has been working in various computer vision fields for over 15 years. Matan holds a PhD (direct program) in Computer Science from the Technion (2010) and is an alumnus of the Talpiot program.
From Product to Research and Back (in Alibaba DAMO Machine Intelligence Israel Lab)
One of the challenges facing researchers is having their research make it into a product and create business impact. They have to “shop” their research around the organization to find the right partner who needs exactly the research they have done.
In this talk, we will discuss how we alleviate this issue in Alibaba’s DAMO Israel Machine Intelligence lab. We will show three specific examples: 1. How we can deploy best-in-class networks to Alibaba’s mobile apps, keeping them lightweight and efficient through state-of-the-art pruning and network architecture search; 2. How we handle image classification in the real world, with best-in-class multi-class labeling; 3. How we can provide real-world image editing - “object deletion” with a best-performing inpainting algorithm that is especially suited to the task at hand.
These three are examples of how we increase the impact of our research, by first identifying the fundamental technologies required for the products we are involved in, understanding their requirements, and then executing world-leading research to provide a business edge.
Chief Technology Officer, Mobileye; Senior Fellow, Intel Corporation; Professor, Rachel and Selim Benin School of Computer Science and Engineering, Hebrew University of Jerusalem
Shai Shalev-Shwartz is the CTO of Mobileye and a Senior Fellow at Intel.
Professor Shalev-Shwartz holds a professor position in the Rachel and Selim Benin School of Computer Science and Engineering at the Hebrew University of Jerusalem. Before joining the Hebrew University, Prof. Shalev-Shwartz was a research assistant professor at the Toyota Technological Institute in Chicago, and also worked at Google and IBM Research. Prof. Shalev-Shwartz is the author of the book “Online Learning and Online Convex Optimization” and a co-author of the book “Understanding Machine Learning: From Theory to Algorithms.” He has written more than 100 research papers, focusing on machine learning, online prediction, optimization techniques, and practical algorithms.
On the Challenges of Building a Camera-only, Complete, Self-Driving System
Humans can drive a car using a vision-only system, without relying on 3D sensors at all, and achieve remarkably high accuracy. Can we match this ability using computer vision? The talk will focus on some of the challenges involved, including machine learning with extremely high accuracy, lifting a 2D projection back to the 3D world, and developing decision-making algorithms that are robust to sensing errors.
VP of Algorithms, Healthy.io
Rise of The 3D Medical Selfie
Engineering Group Manager, Perception, General Motors
Bat El leads the perception group at the General Motors Israel Technical Center. The group develops advanced algorithms for automated driving capabilities in future GM products. Bat El's background includes hands-on and leadership positions in computer vision algorithms and software development. She graduated from the ATUDA program, where she served in an elite technological unit of the intelligence corps. She holds an MSc in electrical engineering from TAU and a BSc in computer and electrical engineering from the Technion.
AV Perception Productization
Perception is a well-studied and active research field. Despite numerous research breakthroughs and high industry focus, productization of perception systems remains a challenge. This talk will discuss the complexity of creating perception algorithms for real products, including advanced sensor-fusion techniques, uncertainty estimation, algorithm generalization to different sensor types, and meeting aggressive compute constraints while maintaining high accuracy. Additionally, since perception is a data-hungry system, we will discuss how improvement over time is achieved as data becomes available.
Investment Director, Applied Materials
Michael joined Applied Ventures in 2015 after working for more than 12 years in advanced technology development at Applied Materials and Intel Research. Michael’s investment areas include AI/ML hardware and infrastructure, silicon photonics, embedded memory technology, and semiconductor materials. His most recent investment was with portfolio company Syntiant, where he serves on the board as Director. Prior to joining Applied Ventures, Dr. Stewart was co-founder of JUSE LLC, a consumer electronics-focused startup, and the inventor of the low-cost CRAFT Cell for silicon photovoltaics. Dr. Stewart holds a Ph.D. in Chemistry from Purdue University and an MBA from the University of California at Berkeley (Haas School of Business), and is an inventor on over 40 US and world patents and an author of 30 peer-reviewed publications.
Growing New Companies in Intelligent Edge Technology: Hitting a Fast-Moving Target
More than eight years have passed since the “ImageNet moment” spurred a fundamental shift in the direction of information technology, followed by distinct waves of startups founded to commercialize the use of CNNs and other ANNs in practical settings.
While the first waves focused on novel engineering capabilities and technical depth, the current wave is fully engaged in growing businesses based on Intelligent Edge technologies, facing a wide range of competitors from public companies and other startups.
Development of the underlying hardware technology, enabling large advances in the energy efficiency, accuracy, and latency of compute operating at the edge, has begun to accelerate along with the emergence of the market, with key innovations in embedded memory, sensor technology, and the integration workflow for creating novel Intelligent Edge systems. In this talk we will cover some of the challenges that different categories of startups commercializing the Intelligent Edge are facing, and how partnerships that leverage the depth of capabilities present in the semiconductor industry can be employed to differentiate and advantage startups in the space.
Yuval is a postdoctoral researcher working with Prof. Tomer Michaeli at the Technion. His research focuses on the intersection of computer vision and audio processing with machine learning. He completed his PhD at the Weizmann Institute of Science, where his advisor was Prof. Michal Irani. Previously, he completed his M.Sc. at the Technion, where he was advised by Prof. Yoav Y. Schechner.
Explorable Image Restoration
Single image super resolution (SR) has seen major performance leaps in recent years. However, existing methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the observed low-resolution (LR) image. These different explanations of the LR image may dramatically vary in their textures and fine details, and may often encode completely different semantic information. In this work, we introduce the task of explorable super resolution. We propose a framework comprising a graphical user interface with a neural network backend, allowing the user to edit the SR output so as to explore the abundance of plausible HR explanations of the LR input. At the heart of our method is a novel module that can wrap any existing SR network, analytically guaranteeing that its SR outputs would precisely match the LR input when downsampled. Besides its importance in our setting, this module is guaranteed to decrease the reconstruction error of any SR network it wraps, and can be used to cope with blur kernels that are different from the one the network was trained for. We illustrate our approach in a variety of use cases, ranging from medical imaging and forensics to graphics.
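The consistency-enforcing idea described in the abstract can be sketched in a few lines of NumPy. This is a simplified stand-in, not the paper's module: it assumes a 2x box-average downscaling kernel, and corrects any SR output by upsampling the LR residual so that downsampling the corrected output reproduces the LR input exactly.

```python
import numpy as np

def downsample(img, s=2):
    """Simple s x s box-average downscaling (stand-in for the assumed kernel)."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Nearest-neighbour upscaling; each LR pixel fills an s x s block."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def enforce_consistency(sr, lr, s=2):
    """Correct an SR output so that downsampling it reproduces the LR input."""
    residual = lr - downsample(sr, s)   # how far the SR output is from consistency
    return sr + upsample(residual, s)   # spread the correction back to HR pixels

rng = np.random.default_rng(1)
lr = rng.random((4, 4))                 # observed low-resolution image
sr = rng.random((8, 8))                 # any SR network output, possibly inconsistent
sr_fixed = enforce_consistency(sr, lr)  # now downsample(sr_fixed) == lr
```

Because box-averaging a constant block returns that constant, the correction is exact for this kernel; the paper's module provides the analogous guarantee for general downsampling kernels.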
VP and Head of the Technology Infrastructure Division, Israel Innovation Authority
Dr. Aviv Zeevi Balasiano is VP and Head of the Technology Infrastructure Division at the Israel Innovation Authority. Until two years ago, Dr. Balasiano served as the head of the ICT department in the Israeli Directorate for the EU Framework Programme, a government agency that promotes joint Israeli-EU R&D ventures within the EU’s R&D Framework Program. He has a PhD in Information Systems from Tel Aviv University; his research field involves estimating the value of information in R&D. Aviv has also taken part in an international research effort on defining the productivity of ICT in the era of cyberspace, the Internet, open information and shared knowledge, in cooperation with the Stevens Institute of Technology. He holds degrees in Economics and Political Science.
Dr. Balasiano served for 5 years as an artillery officer in the IDF, where he received a General IDF Commander's honor, followed by 16 years in the IT industry, mainly in software development and simulation.
AI Infrastructure - National Need
In order to perform complex calculations in the field of artificial intelligence, a great deal of computational power is required, capable of handling information in very large volumes. In fact, the need for artificial intelligence computing and the need to solve increasingly complex computing problems are consistently pushing the computing market forward, including the move to GPU processing units and the design of new components tailored specifically for artificial intelligence computing. For Israel, the ability to stay up-to-date and relevant is required, along with the ability to research and innovate independently. It is important to emphasize that the needs are not only in computing power itself, but also in storage, communication, support and more.
When we come to define the infrastructure needs for artificial intelligence, we need to ask two questions. The first is who our users are. The second is what their needs are regarding the following topics: access to shared information, cost savings, the ability to solve large-scale problems, classification constraints, confidentiality and security, testing of innovative hardware and software, community support, and education and training.
Leah Bar holds a B.Sc. in Physics, an M.Sc. in Bio-Medical Engineering, and a PhD in Electrical Engineering, all from Tel-Aviv University.
She worked as a post-doctoral fellow in the Department of Electrical Engineering at the University of Minnesota.
She is currently a senior researcher at MaxQ-AI, a medical AI start-up, and also a researcher at the Mathematics Department of Tel-Aviv University.
Her research interests include machine learning, image processing, computer vision, and variational methods.
PDE-Based Tomography and Inverse Problems Solver by Unsupervised Learning
We introduce a novel neural network-based partial differential equations solver for forward and inverse problems. The solver is grid-free, mesh-free, and shape-free, and the solution is approximated by a neural network.
We employ an unsupervised approach in which the input to the network is a point set in an arbitrary domain and the output is the set of corresponding function values. The network is trained to minimize deviations of the learned function from the PDE solution while satisfying the boundary conditions.
The resulting solution in turn is an explicit smooth differentiable function with a known analytical form.
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be calculated analytically to any order. This framework therefore enables the solution of high-order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems, where the optimized loss function consists of a few elements: fidelity terms in the L2 and L-infinity norms, boundary- and initial-condition constraints, and additional regularizers. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on several free-shape 2D second-order systems with application to Electrical Impedance Tomography (EIT).
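The collocation-style recipe described above (minimize PDE residuals over an arbitrary point set plus boundary-condition terms) can be illustrated with a deliberately tiny stand-in: a polynomial trial function replaces the neural network and linear least squares replaces gradient descent. The equation, basis size, and boundary weight below are illustrative choices, not the authors' setup.

```python
import numpy as np

# Solve u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0 (exact solution: u = sin x).
# Trial function: u(x) = sum_k c_k x^k, so u(0) = 0 is built into the basis.
ks = np.arange(1, 12)                 # basis powers x^1 .. x^11
xs = np.linspace(0.0, np.pi, 50)      # "grid-free" collocation points

# PDE-fidelity rows: u''(x_i) = sum_k c_k k(k-1) x_i^(k-2) should equal -sin(x_i).
A_pde = (ks * (ks - 1)) * xs[:, None] ** np.maximum(ks - 2, 0)
b_pde = -np.sin(xs)

# Boundary-condition row: u(pi) = 0, weighted to act as a strong constraint.
w = 100.0
A_bc = w * np.pi ** ks[None, :]
b_bc = np.array([0.0])

# Minimize the combined loss (fidelity + boundary term) in one least-squares solve.
A = np.vstack([A_pde, A_bc])
b = np.concatenate([b_pde, b_bc])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# The result is an explicit smooth function with a known analytical form,
# whose derivatives of any order can be computed in closed form.
u = lambda x: (c * x ** ks).sum()
```

The same structure carries over to the neural case: the basis expansion becomes a network, the least-squares solve becomes stochastic optimization of the fidelity and constraint terms, and inverse-problem unknowns join the trainable parameters.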
Deep Learning Research Engineer, Deci.ai; Weizmann Institute of Science
KernelGAN - Blind Super Resolution Kernel Estimation
Super-resolution (SR) methods typically assume that the low-resolution (LR) image was downscaled from the unknown high-resolution (HR) image by a fixed 'ideal' downscaling kernel (e.g. Bicubic downscaling). However, this is rarely the case in real LR images, in contrast to synthetically generated SR datasets. When the assumed downscaling kernel deviates from the true one, the performance of SR methods significantly deteriorates. This gave rise to Blind-SR - namely, SR when the downscaling kernel ("SR-kernel") is unknown. It was further shown that the true SR-kernel is the one that maximizes the recurrence of patches across scales of the LR image. In this paper we show how this powerful cross-scale recurrence property can be realized using Deep Internal Learning. We introduce "KernelGAN", an image-specific Internal-GAN, which trains solely on the LR test image at test time, and learns its internal distribution of patches. Its Generator is trained to produce a downscaled version of the LR test image, such that its Discriminator cannot distinguish between the patch distribution of the downscaled image, and the patch distribution of the original LR image. The Generator, once trained, constitutes the downscaling operation with the correct image-specific SR-kernel. KernelGAN is fully unsupervised, requires no training data other than the input image itself, and leads to state-of-the-art results in Blind-SR when plugged into existing SR algorithms.
Algorithm Engineer, DataGen Technologies
Nathan is an Algorithm Engineer at DataGen Technologies.
His research focuses on creating high quality simulated data for computer vision applications such as pose estimation.
Nathan previously worked at Intel as a Computer Vision Engineer and graduated Summa Cum Laude from Imperial College London with an MEng in Electrical Engineering and a thesis on Action Recognition.
Solving the Data Bottleneck with Simulated Data
In the computer vision industry, gathering and manually annotating data is the most substantial bottleneck in the development of deep learning solutions. A promising solution is to generate data through 3D simulations, as they provide perfect annotations and densely sample edge cases that real datasets fail to capture. Yet, a known shortcoming of this method is the domain gap between the simulated and real-world domains. We show it can be overcome through the combined use of photorealistic simulation and domain adaptation. To validate our claim on a case study, we generated simulated datasets that achieve state-of-the-art performance for 2D hand joint estimation. In this talk, we will present this methodology as a base for solving practical computer vision challenges in a wide range of domains.
JA-POLS: a Moving-camera Background Model via Joint Alignment and Partially-overlapping Local Subspaces
Background models are widely used in computer vision. While successful Static-camera Background (SCB) models exist, Moving-camera Background (MCB) models are limited. Seemingly, there is a straightforward solution: 1) align the video frames; 2) learn an SCB model; 3) warp either original or previously-unseen frames toward the model. This approach, however, has drawbacks, especially when the accumulated camera motion is large and/or the video is long. Here we propose a purely-2D unsupervised modular method that systematically eliminates those issues. First, to estimate warps in the original video, we solve a joint-alignment problem while leveraging a certifiably-correct initialization. Next, we learn both multiple partially-overlapping local subspaces and how to predict alignments. Lastly, at test time, we warp a previously-unseen frame based on the prediction and project it on a subset of those subspaces to obtain a background/foreground separation. We show that the method handles even large scenes with relatively free camera motion (provided the camera-to-scene distance does not change much) and that it not only yields state-of-the-art results on the original video but also generalizes gracefully to previously-unseen videos of the same scene. The talk is based on [Chelly et al., CVPR '20]. This is joint work with Vlad Winter, Dor Litvak, Oren Freifeld (all from BGU CS) and David Rosen (MIT).
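The final projection step of the pipeline (separating background from foreground by projecting a frame onto a learned subspace) can be sketched in its simplest static-camera, single-subspace form. The synthetic data, sizes, and rank below are illustrative only; the actual method uses multiple partially-overlapping local subspaces over jointly-aligned frames.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic background-only training frames living in a rank-3 subspace.
n_pix, n_frames, rank = 400, 30, 3
U_true = rng.normal(size=(n_pix, rank))
frames = U_true @ rng.normal(size=(rank, n_frames))   # columns are vectorized frames

# Learn the background subspace with an SVD (PCA of the frame matrix).
U, _, _ = np.linalg.svd(frames, full_matrices=False)
basis = U[:, :rank]

# A new test frame = background + a sparse "foreground" object on the first pixels.
fg = np.zeros(n_pix)
fg[:20] = 5.0
frame = U_true @ rng.normal(size=rank) + fg

background = basis @ (basis.T @ frame)   # projection onto the learned subspace
foreground = frame - background          # residual = the object not in the model
```

The foreground residual is large exactly on the object pixels and near zero elsewhere; the paper's contribution is making this idea work for moving cameras via joint alignment and local, partially-overlapping subspaces.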
Chief Architect and VP SystemHailo Technologies
Daniel is the Chief Architect and VP System of Hailo Technologies, a start-up developing a power-efficient AI processor chip for edge applications. Since 2017 he has been in charge of developing the novel structure-defined dataflow architecture of Hailo's IP and the architecture of Hailo's chips.
Previously he held positions at Broadcom Ltd. and in the intelligence technology unit of Israel's Intelligence Corps.
Daniel holds a B.Sc. in Electrical Engineering from the Technion – Israel Institute of Technology.
Empowering AI: How to Build Highly Efficient Hardware for AI at the Edge?
As deep learning is showing potential value in different markets, there is an increasing need to be able to run inference efficiently on edge devices.
In this talk we will focus on the fundamental characteristics of deep learning algorithms, analyze the challenges they introduce to the classical, 60-year-old von Neumann processing approach, and review guidelines for building more efficient domain-specific processing architectures.
We will begin with some theoretical reasoning behind domain-specific architectures and their implementation in the field of deep learning, and more specifically in machine vision applications. We will then use various quantitative measures and more detailed design examples to link theory and practice.
Hailo has developed a specialized deep learning processor that delivers the performance of a data center-class computer to edge devices. Hailo’s AI microprocessor is the product of a rethinking of traditional computer architectures, enabling smart devices to perform sophisticated deep learning tasks such as imagery and sensory processing in real time with minimal power consumption, size and cost.
Sivan Doveh is a student researcher at the Computer Vision and Augmented Reality (CVAR) group at IBM Research AI.
She also completed an MSc at Tel Aviv University under the supervision of Raja Giryes. Her research is focused on meta-learning.
DEGAS - Differentiable Efficient Generator Search
Beyond Weak Perspective for Monocular 3D Human Pose Estimation
Algorithm DeveloperWSC Sports
Sahar Froim is a senior algorithm developer at WSC Sports, focusing on the fascinating world of machine-learning in sports broadcasting. Sahar is currently pursuing a Ph.D. at the School of Electrical Engineering in Tel-Aviv University, where he studies the intersection between machine-learning & optics. Prior to WSC Sports, Sahar worked at Qualcomm for 4 years, while completing his B.Sc. & M.Sc. in Electrical Engineering (magna cum laude) at Tel-Aviv University. Sahar served for 6 years as an officer in an elite Israeli intelligence unit, leading a team of tech researchers.
Improving Object Detectors Using Preceding And Successive Convolutional Neural Networks
WSC Sports' main goal is developing action detection and event recognition algorithms for sports with a very high success rate, in order to automatically generate highlight videos from sports broadcasts. Due to its high recall rates, an object detector is a commonly used tool in our proprietary action detection algorithms.
Its main limitation is its precision on sports broadcasts, where "real world" images produce false positives.
In this presentation, we propose a method for improving object detection models’ precision, without enlarging the train set, by using preceding and successive convolutional neural networks.
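As a hedged sketch of the general idea (our own toy temporal-consistency filter; the actual WSC Sports method and its CNNs are proprietary and more involved): a detection in the current frame is kept only if it is supported by detections in the preceding and successive frames:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def temporally_consistent(prev_dets, cur_dets, next_dets, thr=0.3):
    """Keep current-frame detections matched in both neighbouring frames."""
    return [d for d in cur_dets
            if any(iou(d, p) > thr for p in prev_dets)
            and any(iou(d, n) > thr for n in next_dets)]

prev = [(10, 10, 50, 50)]
cur  = [(12, 11, 52, 51), (200, 200, 240, 240)]   # second box is a one-frame flicker
nxt  = [(14, 12, 54, 52)]
kept = temporally_consistent(prev, cur, nxt)      # the flicker is filtered out
```

This raises precision without touching the train set, at the cost of a small latency (one frame of look-ahead).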
Computer Vision Research EngineerRafael
Alex is an algorithm engineer and researcher at the computer vision department in Rafael. He works on deep learning approaches with applications including change detection, scene understanding, image compression and 3D reconstruction.
Alex completed his M.Sc. in Electrical Engineering at the Technion, and his B.Sc. in Physics and Electrical Engineering at Tel-Aviv University.
Change Detection Using Self-Supervised Metric Learning
Given a pair of images of the same geographic area taken at different times, we wish to detect changes between them. Change detection is a challenging task. It is required to distinguish between fundamental changes, often man made, and insignificant natural ones. The latter may result from changing lighting, weather, camera pose, slight vegetation movement due to wind, and small errors in image registration. We address the change detection problem by training a learned descriptor using registered image pairs. Our fully convolutional CNN-based descriptor can efficiently detect changes in large aerial image pairs. It is shown to generalize well for a completely new scene and type of changes, while being robust to registration errors. The labeling of each image pair as similar or different is implied by the automatic registration process. Therefore, no manual annotation of any kind is required. While the lack of supervision results in label noise, the algorithm proves highly robust to it.
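Schematically, once a dense descriptor is available, detection reduces to thresholding a per-pixel descriptor-distance map. The following numpy sketch uses random vectors as a stand-in for the learned CNN descriptor (an assumption for illustration only):

```python
import numpy as np

def distance_map(desc_a, desc_b):
    """Per-pixel Euclidean distance between dense descriptor maps (H, W, D)."""
    return np.linalg.norm(desc_a - desc_b, axis=-1)

def detect_changes(desc_a, desc_b, thr):
    """Binary change mask: pixels whose descriptors moved apart."""
    return distance_map(desc_a, desc_b) > thr

H, W, D = 16, 16, 8
rng = np.random.default_rng(1)
base = rng.random((H, W, D))                      # stand-in descriptors, time A
before, after = base.copy(), base.copy()
after[4:8, 4:8] += 1.0                            # a genuine, significant change
after += 0.02 * rng.standard_normal((H, W, D))    # insignificant nuisance variation
mask = detect_changes(before, after, thr=0.5)     # only the true change survives
```

A descriptor trained on registered pairs maps nuisance variation (lighting, slight misregistration) to nearby vectors, so the threshold separates it cleanly from fundamental changes.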
ProVision Algorithm ManagerApplied Materials
I hold a B.Sc. in Electronic and Computer Engineering from Ben-Gurion University,
and an M.Sc. in Electronic and Computer Engineering from Tel Aviv University, with a specialization in signal and image processing.
In recent years I have managed an algorithm group at Applied Materials, where we develop innovative metrology methods for SEM images in the semiconductor industry.
One-Shot Semantic Segmentation CNN with Automatic Pruning
Many real-world applications suffer from a lack of ground truth. We propose an innovative end-to-end network that deals with zero-shot or few-shot segmentation.
We will show an innovative visual intuition that makes triplet-loss post-processing redundant and enables end-to-end networks for many applications.
Oshri Halimi is a Ph.D. student in the electrical engineering faculty at Technion, supervised by Prof. Ron Kimmel.
Her research investigates geometric invariants and their application in computer vision and shape analysis. In particular, she is interested in the interface between geometry and deep learning.
She published in top-tier conferences for computer vision (CVPR, ECCV) and organized workshops in the field: "iGDL 2020: Israeli Geometric Deep Learning Workshop" and "Learning and Processing of Geometric Visual Structures," SIAM Conference on Imaging Science (SIAM-IS20). She was awarded the Israel Ministry of Science Jabotinsky Fellowship for Doctoral Students.
She holds a B.Sc. in Physics and Electrical Engineering from the Technion, from which she graduated cum laude. She is an alumna of the Technion Excellence Program and the Archimedes Program, and a bronze medalist in the IChO. She served in Unit 8200.
Unsupervised Learning of Dense Shape Correspondence
We introduce the first completely unsupervised correspondence learning approach for deformable 3D shapes.
Key to our model is the understanding that natural deformations, such as changes in pose, approximately preserve the metric structure of the surface, yielding a natural criterion to drive the learning process toward distortion-minimizing predictions. On this basis, we overcome the need for annotated data and replace it by a purely geometric criterion. The resulting learning model is class-agnostic, and is able to leverage any type of deformable geometric data for the training phase. In contrast to existing supervised approaches which specialize on the class seen at training time, we demonstrate stronger generalization as well as applicability to a variety of challenging settings. We showcase our method on a wide selection of correspondence benchmarks, where the proposed method outperforms other methods in terms of accuracy, generalization, and efficiency.
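The metric-preservation criterion can be illustrated with a toy example (a numpy sketch under our own simplifications: Euclidean rather than geodesic distances, and a hard correspondence matrix): a candidate correspondence P is scored by how much it distorts pairwise distances between the two shapes:

```python
import numpy as np

def metric_distortion(X, Y, P):
    """Unsupervised criterion: a correspondence P (rows sum to 1) should
    map pairwise distances on shape X to pairwise distances on shape Y."""
    DX = np.linalg.norm(X[:, None] - X[None], axis=-1)   # (n, n) distances on X
    DY = np.linalg.norm(Y[:, None] - Y[None], axis=-1)   # (m, m) distances on Y
    return np.linalg.norm(DX - P @ DY @ P.T)             # metric distortion

rng = np.random.default_rng(0)
X = rng.random((5, 3))                                   # toy "shape": 5 points
Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]         # orthogonal = an isometry
Y = X @ Q.T                                              # deformed copy of X
P_true = np.eye(5)                                       # correct correspondence
P_perm = np.eye(5)[[1, 0, 2, 3, 4]]                      # wrong correspondence

d_true = metric_distortion(X, Y, P_true)                 # near zero
d_perm = metric_distortion(X, Y, P_perm)                 # clearly larger
```

Minimizing this distortion over predicted soft correspondences is what lets the training proceed with no annotations at all.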
Senior Lecturer at Faculty of Electrical EngineeringHolon Institute of Technology
Dr. Amir Handelman received his BSc, MSc and PhD degrees in Electrical Engineering in 2008, 2011 and 2014, respectively, all from Tel-Aviv University, Israel. In 2014, Amir joined the faculty of Electrical Engineering at Holon Institute of Technology (HIT) as a tenure-track faculty member and established the Applied Optics and Machine Vision Lab there. In addition to his academic background, Amir has over 10 years' experience in computer vision and optics, gained during his work at several hi-tech companies, such as Israel Aerospace Industries (IAI), Volume-Elements Ltd., and KLA-Tencor.
How Does Computer Vision Improve Surgeons’ Performance?
Lead Computer Vision ResearcherNovocure
Michal Holtzman Gazit is a lead computer vision researcher at Novocure, with nearly 20 years of experience in computer vision, image processing, and medical imaging. She received her BSc. (1998) and MSc. (2004) in Electrical Engineering from the Technion, and her PhD (2010) in Computer Science from the Technion. During 2010-2012, she was a post-doctoral fellow in the computer science department at the University of British Columbia, Vancouver, Canada. Her main research interests are computer vision, image processing, AI in healthcare, and deep learning.
From Scan to Treatment: Fast Estimation for Tumor Treating Fields
Senior Research Scientist, Research Team LeadIBM Research AI
Leonid Karlinsky leads the CV & DL research team in the Computer Vision and Augmented Reality (CVAR) group @ IBM Research AI. Before joining IBM, he served as a research scientist at Applied Materials, Elbit, and FDNA. He is actively publishing and reviewing at ECCV, ICCV, CVPR and NeurIPS, and has served as an IMVC steering committee member for the past three years. His recent research is in the area of few-shot learning, with a specific focus on object detection, metric learning, and example synthesis methods. He received his PhD at the Weizmann Institute of Science, supervised by Prof. Shimon Ullman.
Explainable, Adaptive, and Cross-Domain Few-Shot Learning
In this talk we will discuss our recent advances in few-shot learning, a regime where only a handful of training examples (maybe just one) are available for learning novel categories unseen during training. We will cover a method for few-shot classification that is capable of matching and localizing instances of novel categories, despite being trained and used with only category level image labels and without any location supervision, also opening the door for weakly supervised few-shot detection. We will cover a method for meta-learning a model that automatically modifies its architecture to better adapt to novel few-shot tasks. Finally, we will discuss the limitation of the current few-shot learning methods when handling extreme cases of domain transfer, and offer a new benchmark and some ideas towards cross-domain few-shot learning.
Hagai Lalazar is a computational neuroscientist and entrepreneur, with over 20 years' experience in neuroscience research and innovative startups in Tel-Aviv and Silicon Valley. He completed his BA in Math and Cognitive Science at UC Berkeley, his PhD in neural computation at the Hebrew University, and a Postdoc at the Center for Theoretical Neuroscience at Columbia. He is an expert in motor neuroscience and Brain-Machine Interfaces. He was CTO and Co-Founder of System1 Biosciences, a startup developing a robotic platform for growing 3D brain cultures for drug discovery. He is currently CEO and Co-Founder of a stealth-mode startup, developing breakthrough neuro-technology powered by deep learning.
Brain-Machine Interfaces: The Next Frontier in Human-Computer Interactions
We are at the forefront of a revolution in how we interact with the world. Brain-Machine Interface (BMI) technology uses neural signals and advanced algorithms to decode the brain, enabling interaction with devices using only your mind. Recent advances in BMI are based on progress in invasive and non-invasive sensors, and on the marriage of neuroscience and machine learning. In the next few years, invasive BMI applications will restore movement to paralyzed patients, enable stroke patients to speak, and enable the blind to see. Non-invasive applications will change the way we interact with devices, digital experiences, entertainment, and mental healthcare.
3D Metrology Algorithm Team LeaderApplied Materials
Dr. Anna Levant is a 3D metrology algorithm team leader at Applied Materials. She holds a PhD in Applied Mathematics from the Weizmann Institute of Science, with a focus on chaos theory. Prior to joining Applied Materials, she worked for 10 years at various medical-device companies, leading the development of algorithms for modalities such as MRI, X-ray, and ECG.
3D Metrology: Seeing the Unseen
3D metrology is a new fascinating field in the semiconductor industry. Shrinkage of planar devices has reached its physical limit and advanced nodes resort to 3D design to increase the feature density in the device. Reliable measurements of these 3D structures are crucial for a chip development process.
We propose a novel supervised ML (Machine Learning) based solution for inferring 3D structure from 2D SEM (Scanning Electron Microscope) images. Our algorithm reached sub-nanometer accuracy and high precision.
The generality of our method and its ability to extract hidden information from SEM images open the door to a plethora of applications in 3D metrology for memory and logic devices.
Algorithm EngineerAlibaba DAMO Israel Lab
Hussam is an Algorithm Engineer at Alibaba DAMO Israel Lab. He enjoys doing applied research on pose estimation, person re-identification, and image classification. Prior to Alibaba, Hussam worked at several companies in the retail business as a senior Android developer.
Hussam completed his B.Sc. in tandem with his high-school studies, as part of the "Etgar" program at the University of Haifa.
Compact Network Training for Person ReID
There is a growing interest in person re-identification (ReID), but most studies cannot be used for real-world applications.
Leading academic deep learning ReID models remain large and computationally expensive, and are therefore unfavourable for scalable real-world deployments; in particular, their architectures are not suited to running on a CPU.
In this talk, we will explore training techniques and architecture modifications for training an efficient and compact network for person ReID, that achieves state-of-the-art results while having 10x fewer parameters and 13x fewer FLOPS.
Then, we will briefly present how these learnings and techniques, such as the soft triplet loss, can be applied to other scenarios, such as the fine-grained image classification task.
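For reference, one standard "soft" formulation of the triplet loss replaces the hinge with a softplus; the sketch below shows this common variant, which may differ in detail from the one used in the talk:

```python
import numpy as np

def soft_triplet_loss(anchor, positive, negative):
    """Softplus variant of the triplet loss: log(1 + exp(d_ap - d_an)).
    Unlike the hinged version, it keeps a smooth, nonzero gradient."""
    d_ap = np.linalg.norm(anchor - positive, axis=-1)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative, axis=-1)   # anchor-negative distance
    return np.log1p(np.exp(d_ap - d_an)).mean()

a = np.array([[0.0, 0.0]])   # anchor embedding
p = np.array([[0.1, 0.0]])   # same identity: close to the anchor
n = np.array([[3.0, 0.0]])   # different identity: far from the anchor
easy = soft_triplet_loss(a, p, n)
hard = soft_triplet_loss(a, n, p)   # swapped roles: loss grows sharply
```

Such a loss pushes same-identity embeddings together and different-identity embeddings apart, which is what both ReID and fine-grained classification need.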
Senior Machine Learning ResearcherNexar
How to Build High-Quality Maps from Noisy and Unlabeled Data
Building fresh, accurate maps of road items is a key ingredient in smart-city management and in enabling fully autonomous vehicles. Building such maps from commodity sensors, such as a monocular camera, a GPS sensor, and an IMU, is a major challenge. It is even harder in a crowdsourcing setting, where the data is noisy and the camera position is arbitrary and unknown.
In this talk, we address this problem and related issues, namely camera alignment, self-localization, and depth estimation. We demonstrate that using self-supervised approaches, along with a large corpus of diverse noisy, unlabeled data, we can obtain surprisingly accurate results.
Staff Researcher at the Smart Sensing and Vision GroupGeneral Motors R&D Israel
I am a Staff Researcher at the Smart Sensing and Vision group, General Motors R&D Israel, working in the fields of computer vision and machine learning. I received my B.Sc. degree (with honors) in mathematics and computer science from Tel-Aviv University in 2000, and my M.Sc. and PhD degrees in applied mathematics and computer science from the Weizmann Institute in 2004 and 2009, respectively. At the Weizmann Institute I conducted research in human and computer vision under the supervision of Professor Shimon Ullman. Since 2007 I have been conducting industrial computer vision research and development at several companies, including General Motors and Elbit Systems, Israel.
3D-LaneNet: End-to-End 3D Multiple Lane Detection
We introduce a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with on-board sensing without assuming a known constant lane width or relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intra-network IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our end-to-end approach which replaces common heuristics such as clustering and outlier rejection, casting lane estimation as an object detection problem. In addition, our approach explicitly handles complex situations such as lane merges and splits. Results are shown on two new 3D lane datasets, a synthetic and a real one. For comparison with existing methods, we test our approach on the image-only tuSimple lane detection benchmark, achieving performance competitive with state-of-the-art.
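At its core, an inverse-perspective mapping is a homography between image view and top view. The sketch below (with an illustrative, made-up homography matrix, not the paper's learned projection) shows the warp and its inverse:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography (homogeneous coordinates)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Illustrative homography (an assumption for this sketch): rows compress
# with distance, as under perspective; the IPM inverts this so that
# parallel lanes stay parallel in the top view.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 1.0]])

top_view = np.array([[0.0, 1.0], [2.0, 1.0]])    # two lane points at equal depth
image = apply_homography(H, top_view)            # their projected image positions
recovered = apply_homography(np.linalg.inv(H), image)   # IPM recovers top view
```

In 3D-LaneNet this projection is applied to feature maps inside the network, so both the image-view and top-view representations stay differentiable end to end.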
Computer Vision ResearcherRafael
Uriya is currently a computer vision researcher at Rafael. He has worked on noise removal from imagery for noise-sensitive sensors and on change detection. His current research is focused on unsupervised change detection on aerial images, based on metric-learning.
He holds an M.Sc. in Electrical Engineering from Tel Aviv University, specializing in computer vision algorithms and software development.
Principal Data ScientistBooking.com
Pavel Levin is a Principal Data Scientist at Booking.com, one of the world's leading digital travel platforms. Over the past five years with the company, he has worked on a number of important AI products, including the Booking Assistant (a customer service chatbot), an in-house machine translation engine, various recommendation and personalization applications and computer vision projects to create an even smoother, insightful and relevant experience on Booking.com. Trained as an applied mathematician, he has keen interest in all applied aspects of statistical models, learning algorithms and data science in general.
Generalizable Representations of Hotels’ Image Galleries through Multi-Task Learning
In today's increasingly visual world of e-commerce, products are often accompanied by photo galleries describing various product aspects. We will dive into the travel-accommodation use case and discuss a deep learning-based solution to the problem of finding meaningful representations of hotel galleries in a large-scale e-commerce setting. The universality of the embeddings and their flexibility toward new downstream tasks are achieved by training the gallery encoder on multiple independent tasks using a multi-task learning (MTL) approach. To evaluate the role of MTL in gallery encoding, we look at how the performance of the jointly MTL-trained model on each task compares to that of separately trained end-to-end models. To assess the quality of the learned representations, we mainly look at their performance in downstream applications.
Engineering Manager – Image Processing MathWorks
Analysis and Segmentation of Very Large Pathology Images Using MATLAB
Computer Vision Department ManagerPercepto
Ovadya joined Percepto in January 2019 as a Computer Vision team leader. He has over 20 years of experience building computer vision solutions in industry, at companies such as Intel Corporation, Applied Materials, and PointGrab. In his last position, at Innoviz-Tech, Ovadya headed the Computer Vision department; he set its foundation, including defining the computer vision product specs. Ovadya has vast experience in computer vision applications, including deep learning and object detection and tracking in mass-production products such as Samsung TVs.
Ovadya holds an M.Sc. degree in the field of computer vision from the Weizmann Institute of Science.
End-to-End Change Detection for High Resolution Drone Images with GAN Architecture
Monitoring large areas is now feasible with high-resolution drone cameras, as opposed to time-consuming and expensive ground surveys. In this work we reveal, for the first time, the potential of using a state-of-the-art GAN-based change detection algorithm with high-resolution drone images for infrastructure inspection. We demonstrate this concept on solar panel installations, proposing a data-driven deep learning algorithm for identifying changes.
We use the Conditional Adversarial Network approach to present a framework for change detection in images. The proposed network architecture is based on pix2pix GAN framework. Extensive experimental results have shown that our proposed approach outperforms the other state-of-the-art change detection methods.
Research ScientistElbit Systems Aerospace
Yakov Miron holds a B.Sc. and an M.Sc. in Electrical Engineering from Ben-Gurion University and Tel Aviv University, respectively.
He previously worked at Motorola Inc. and Silentium as an algorithm developer.
He is currently a Computer Vision and Deep Learning Researcher in the R&D division of Elbit Systems Aerospace.
His topics of interest are machine learning, deep learning, computer vision, and 3D modeling, as well as navigation, localization, and SLAM.
Generating Photo-Realistic Images from Simulation and Computer Graphics
Computer graphics images are commonly used in various fields, such as medical imaging, gaming, animation, augmented reality, and many more.
Contemporary graphics engines, however, are only able to produce scenes of limited photorealism.
Senior Machine Learning ManagerBooking.com
Guy is the Senior Machine Learning Manager @Booking.com's Tel Aviv ML Center, where he leads a large group of talented ML scientists and engineers tackling diverse AI problems (recommendation systems, vision, NLP, RL and more).
Prior to that, Guy was the Director of AI Research at FDNA, where he helped diagnose rare genetic disorders in children. He also founded the AI in Genomics community in Israel, with over 1500 members.
Guy brings vast experience in dealing with problems across multiple domains using machine learning (text, images, acoustical & bio-medical signals, finance and genomics).
Guy holds a BSc & MSc in EE and an MBA, all from Tel Aviv University.
Senior Applied ScientistAmazon
Assaf is a Senior Applied Scientist at Amazon. Since 2015, he has taken part in various deep learning projects, mostly in the Fashion AI domain.
Before joining Amazon, Assaf was a computer vision researcher at the Israeli Intelligence Corps and a senior algorithm engineer at medical and cybersecurity startups.
Assaf has published papers in IEEE, KDD and CVPR, and holds a BSc and MSc in Electrical Engineering from Tel-Aviv University.
Image Based Virtual Try-On Network From Unpaired Data
This paper presents a new image-based virtual try-on approach (Outfit-VITON) that helps visualize how a composition of clothing items selected from various reference images form a cohesive outfit on a person in a query image. Our algorithm has two distinctive properties. First, it is inexpensive, as it simply requires a large set of single (non-corresponding) images (both real and catalog) of people wearing various garments without explicit 3D information. The training phase requires only single images, eliminating the need for manually creating image pairs, where one image shows a person wearing a particular garment and the other shows the same catalog garment alone. Secondly, it can synthesize images of multiple garments composed into a single, coherent outfit; and it enables control of the type of garments rendered in the final outfit. Once trained, our approach can then synthesize a cohesive outfit from multiple images of clothed human models, while fitting the outfit to the body shape and pose of the query person. An online optimization step takes care of fine details such as intricate textures and logos. Quantitative and qualitative evaluations on an image dataset containing large shape and style variations demonstrate superior accuracy compared to existing state-of-the-art methods, especially when dealing with highly detailed garments.
Computer Vision Algorithm Team LeaderEyesight Technologies
Deep Face Tracking by 3D Alignment – It Is All in the (Semi-Synthetic) Data
Senior Algorithm Researcher Alibaba DAMO Israel Lab
TResNet: High Performance GPU-Dedicated Architecture
Many deep learning models developed in recent years reach higher ImageNet accuracy than ResNet50, with a lower or comparable FLOPs count. While FLOPs are often seen as a proxy for network efficiency, when measuring actual GPU training and inference throughput, vanilla ResNet50 usually offers a better throughput-accuracy trade-off.
In this talk, we will discuss the bottlenecks induced by FLOPs-optimizations, and suggest alternative design patterns that better utilize GPU structure and assets. We then introduce a new family of GPU-dedicated models, called TResNet, which achieves better accuracy and efficiency than previous ConvNets.
We demonstrate that on ImageNet, all along the top1 accuracy curve TResNet gives better GPU throughput than existing models. In addition, on three commonly used downstream single-label classification datasets it reaches new state-of-the-art accuracies. We also show that TResNet generalizes well to other computer vision tasks, reaching top scores on multi-label classification and object detection tasks.
Faculty Member, School of Electrical and Computer EngineeringBen-Gurion University
QANet -A Quality Assurance Neural Network for Image Segmentation
In this talk I will introduce a novel Deep Learning framework, which quantitatively estimates image segmentation quality without the need for human inspection or labeling. We refer to this method as a Quality Assurance Network - QANet. Specifically, given an image and a ‘proposed’ corresponding segmentation, obtained by any method including manual annotation, the QANet solves a regression problem in order to estimate a predefined quality measure (for example, the IoU or a Dice score) with respect to the unknown ground truth. The QANet is by no means yet another segmentation method. Instead, it performs a multi-level, multi-feature comparison of an image-segmentation pair based on a unique network architecture, called the RibCage.
To demonstrate the strength of the QANet, we addressed the evaluation of instance segmentation using two different datasets from different domains, namely, high throughput live cell microscopy images from the Cell Segmentation Benchmark and natural images of plants from the Leaf Segmentation Challenge. While synthesized segmentations were used to train the QANet, it was tested on segmentations obtained by publicly available methods that participated in the different challenges. We show that the QANet accurately estimates the scores of the evaluated segmentations with respect to the hidden ground truth, as published by the challenges’ organizers.
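For reference, the regression targets such a network estimates are standard overlap measures; here is a minimal numpy sketch of IoU and Dice between a proposed mask and the ground truth (not the QANet architecture itself):

```python
import numpy as np

def iou_score(pred, gt):
    """Intersection over union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter, union = (pred & gt).sum(), (pred | gt).sum()
    return inter / union if union else 1.0

def dice_score(pred, gt):
    """Dice coefficient of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2 * (pred & gt).sum() / denom if denom else 1.0

gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1       # 16-pixel square
pred = np.zeros((8, 8), int); pred[3:7, 2:6] = 1   # same square, shifted one row
iou = iou_score(pred, gt)                          # 12 / 20 = 0.6
dice = dice_score(pred, gt)                        # 24 / 32 = 0.75
```

The point of the QANet is to regress such scores from the image-segmentation pair alone, when the ground truth `gt` is unavailable.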
Chief Science Officer & Head of AISeeTree Systems Inc.
I have been leading the science, computer vision, and ML activities of SeeTree, a new successful start-up in the agtech domain, since its pre-seed development. As a former senior director of algorithms and R&D at Mobileye, I leverage 13 years of experience there to help introduce a similar transformation to the agtech world, together with an excellent multidisciplinary team.
Semantic Spatial Alignment for Image Registration in Remote Sensing
We introduce a new method of image-registration, named "semantic spatial alignment" (SSA).
This method optimizes the semantic-difference loss between two images via a gradient-descent process over the parameters of a neural network composed of a single differentiable spatial transformer. The new method shows a dramatic improvement over state-of-the-art feature-point-matching methods (e.g., SIFT, ORB) when the inputs are time-repeating orthomosaics of tree plantations, where the inputs can come from different sensors and resolutions and contain changes in the shape of the tree objects. The method is also superior in cases where affine, projective, or other simple homographic transformation maps have only limited success. It demonstrates a successful use of deep learning to dramatically improve a traditional "classical" computer vision task such as image registration.
PhD candidate, EE FacultyTechnion
Tamar Rott Shaham is a PhD candidate at the Electrical Engineering faculty in the Technion - Israel Institute of Technology, under the supervision of Prof. Tomer Michaeli, where she also received her B.Sc. in 2015. Her research interests are in Image Processing and Computer Vision. Tamar won several awards including Adobe Research Fellowship (2020), ICCV 2019 Best Paper Award (Marr Prize), Google WTM Scholar (2019), The Israeli Higher Education Council Scholarship for Data Science PhD students, and the Schmidt Postdoctoral Award.
SinGAN: Learning a Generative Model from a Single Natural Image
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
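The coarse-to-fine structure can be sketched as a plain image pyramid (numpy only; in SinGAN each level additionally trains a small patch GAN at that scale, which this illustrative toy omits):

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool downscaling (assumes dimensions divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def build_pyramid(img, n_scales):
    """Coarse-to-fine list of scales; SinGAN trains one GAN per level,
    feeding each generator the upsampled output of the coarser level."""
    levels = [img]
    for _ in range(n_scales - 1):
        levels.append(downscale(levels[-1]))
    return levels[::-1]                      # coarsest scale first

img = np.arange(64, dtype=float).reshape(8, 8)
pyramid = build_pyramid(img, 3)              # shapes (2, 2), (4, 4), (8, 8)
```

The coarsest level fixes global structure while finer levels add texture, which is why samples of arbitrary size keep both.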
Mattan Serry is a data scientist at Microsoft Media AI Research, where he is working on Video Indexer, an Azure service for video analytics.
He holds a bachelor’s degree in computer engineering and a master’s degree in computer science, both from Tel Aviv University.
His thesis work is in the fields of computer vision, deep learning, and 3D understanding, under the supervision of Dr. Amit Bermano.
Before joining Microsoft in 2019, he also worked at Apple and Sony Semiconductor Israel.
Utilizing Learned 3D Attributes for Head Localization from Face Detection
Face and head detection are fundamental problems in computer vision and related areas.
They are a common thread in contemporary applications such as augmented reality, biometric authentication, and social media.
The work presented in this lecture is a novel approach for head localization.
The suggested algorithm infers a person's head location given a prior face detection and other extracted features from the image.
Some features are mandatory and some are optional.
The algorithm uses classical methods – and thus it excels in computational efficiency – to solve the 2D to 2D problem via a 3D person polygon mesh, by understanding the most appropriate model and transformation.
AI & Data Science ResearcherIntel
Adi is a member of the core AI & data science research team of Intel's Advanced Analytics group (deep learning, NLP, and computer vision research for sales and marketing, manufacturing, and healthcare), in parallel with PhD research at the Hebrew University's Computer Science department, supervised by Prof. Leo Joskowicz.
Adi holds an M.Sc in Bio-Engineering, an M.E. in Bio-Medical Engineering and a B.Sc in Electronics engineering.
A Weak Supervision Approach to Detecting Visual Anomalies for Automated Testing of Graphics Units
We present a deep learning system for testing graphics units by detecting novel visual corruptions in videos. Unlike previous work in which manual tagging was required to collect labeled training data, our weak supervision method is fully automatic and needs no human labelling. This is achieved by reproducing driver bugs that increase the probability of generating corruptions, and by making use of ideas and methods from the Multiple Instance Learning (MIL) setting. In our experiments, we significantly outperform self-supervised methods such as GAN-based models and discover novel corruptions undetected by baselines, while adhering to strict requirements on accuracy and efficiency of our real-time system.
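The MIL ingredient can be sketched in a few lines (a toy under our assumptions, not Intel's real-time system): only bag-level labels exist, and a bag, e.g. a video, is scored by pooling over its instance (frame) scores:

```python
import numpy as np

def bag_score(instance_scores):
    """MIL pooling: a bag (e.g. a video) is as anomalous as its most
    anomalous instance (frame), i.e. the max over instance scores."""
    return np.max(instance_scores)

def bag_label(instance_scores, thr=0.5):
    """Bag-level decision from instance scores; no per-frame labels needed."""
    return bag_score(instance_scores) > thr

clean_video = np.array([0.1, 0.2, 0.15, 0.05])   # all frames look normal
buggy_video = np.array([0.1, 0.9, 0.2, 0.1])     # one frame shows a corruption
```

Because reproduced driver bugs only make corruptions *likely* somewhere in a video, this bag-level formulation is what lets training proceed with no human labelling.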
Senior Machine Learning & Computer Vision Researcher, Zebra Medical Vision
Eyal Ziv is a senior ML and computer vision researcher on the R&D team of Zebra Medical Vision.
Eyal's research focuses on the classification and detection of lesions in X-ray images.
Prior to joining Zebra Medical Vision, he worked as an ML researcher in the Aerospace division of Elbit Systems, developing algorithms for autonomous platforms. Eyal Ziv holds a BSc in Aerospace Engineering from the Technion.
His topics of interest are multimodal learning and meta-learning.
From Algorithms to FDA: Improving Patients' Care with AI-Based Triaging Solutions
Chest radiography is by far the most commonly performed radiological examination for screening and diagnosis of many cardiac and pulmonary diseases. However, there is an immense worldwide decrease in the number of physicians capable of providing its rapid and accurate interpretation. With 2 billion people joining the middle class worldwide and a growing global shortage of clinical experts, there is a sense of urgency to develop technologies that can help bridge the gap between supply and demand of radiology services.
Here we will review the research and development of Zebra Medical's AI-based solutions aimed at providing automated and scalable diagnostic support in the interpretation of chest radiographs.
We will demonstrate the application of our technology on real-life clinical examples where such solutions have impacted patients' care by substantially reducing time to treatment and preventing misdiagnosis.
Chief Business Officer, Hailo
Hadar is CBO and Co-Founder of Hailo. Before this role, she served as the first Product Manager at Via Transportation, where she managed multiple core projects, including overseeing algorithms and product development. She also brings a decade of technological experience from the IDF's elite intelligence unit, where she served in various leadership positions, including Chief Architect, and led the unit's flagship R&D project, which was ultimately recognized with the General Chief of Staff Award for Technological Excellence.
Hadar holds a B.Sc. in Physics and Math from the Hebrew University and an MBA from Northwestern University and Tel Aviv University.
94, Yigal Alon St.
Tel Aviv 6109202