November 24, 2015 between 9h00 and 17h00
Workshop : Workshop on Stochastic Geometry and Big Data Overview: The workshop will be held on November 24th, 2015 at INRIA Sophia Antipolis Méditerranée, France.
The workshop is open to the public and provides a great networking opportunity for individuals involved or interested in stochastic geometry and big data analysis with applications in computer vision and image processing. The workshop focuses on point processes, spatial stochastic fields, Bayesian theory and data mining methods applied to curvilinear structure reconstruction, multiple target tracking, change detection or image segmentation and classification.
Invited speakers: Prof. Pascal Fua, École Polytechnique Fédérale de Lausanne, Switzerland
Prof. Tamás Szirányi, MTA Sztaki and Univ. of Technology and Economics Budapest, Hungary
Prof. Alfred M. Bruckstein, Technion – Israel Institute of Technology, Haifa, Israel
Prof. Daniela Zaharie, West University of Timișoara, Romania
Prof. Ba-Ngu Vo, Curtin University, Perth, Australia
Dr. Mathias Ortner, Airbus D&S and IRT Saint Exupéry, Toulouse, France
Prof. Michel Schmitt, Institut Mines-Telecom, Paris, France (This presentation has been cancelled for personal reasons)
October 26, 2015 at 14h30
Invited speaker : Prof. Rozenn Dahyot, School of Computer Science and Statistics, Trinity College Dublin, Ireland Title: Functions and functionals for shape registration, and color transfer
Abstract: In this talk, I will present some example applications (e.g. shape registration and color transfer) where the representative features computed from data are continuous functions, in particular probability density functions (pdfs). One advantage of representing data with functions is to alleviate issues related to the varying resolutions or sampling rates used to capture the data. Many problems of interest in computer vision and pattern recognition can then be expressed as matching and registering functions. Several divergences have been defined for matching pdfs, and this talk will focus on robust and computationally efficient ones. Illustrations of these techniques will be shown on various applications including shape registration, multiple ellipse detection, color transfer, and eye-tracking data analysis.
Bio: In 1998 Rozenn Dahyot won a scholarship from the Laboratoire des Ponts et Chaussées to pursue a PhD in image processing (awarded 2001). Since 2002, she has been working at Trinity College Dublin, attracting about one million euros in grants from various sources (e.g. the EU, industry, the Irish Research Council, and Enterprise Ireland) and collaborating on projects on multimedia data analysis. Currently an Assistant Professor in Statistics at Trinity, Rozenn is coordinating a European project on social media analytics (FP7-PEOPLE-2013-IAPP 612334, 2014-18).
Location: Coriolis
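As a toy illustration of registration by pdf matching, in the spirit of the talk above (the point sets, kernel width, and grid here are invented, not taken from the speaker's work), one can represent two 1-D point sets as Gaussian mixtures and pick the shift that minimizes the L2 distance between the two densities:

```python
import numpy as np

def gmm_pdf(points, grid, sigma=0.1):
    """Mixture density of a 1-D point set (one Gaussian per point) on a regular grid."""
    d = grid[None, :] - points[:, None]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=0) / (len(points) * sigma * np.sqrt(2 * np.pi))

def l2_register(src, dst, shifts, sigma=0.1):
    """Return the shift of `src` minimizing the L2 distance between the two pdfs."""
    grid = np.linspace(-3.0, 3.0, 601)
    dx = grid[1] - grid[0]
    p_dst = gmm_pdf(dst, grid, sigma)
    costs = [((gmm_pdf(src + s, grid, sigma) - p_dst) ** 2).sum() * dx for s in shifts]
    return shifts[int(np.argmin(costs))]

src = np.array([-1.0, -0.5, 0.0])
dst = src + 0.4                           # ground-truth shift
shifts = np.linspace(-1.0, 1.0, 201)
print(l2_register(src, dst, shifts))      # recovers the 0.4 shift (up to grid step)
```

The L2 divergence is a popular choice in this setting because it remains finite and well behaved even when the two densities overlap poorly.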
September 21, 2015 at 14h30
Invited speaker : Minh-Tan Pham, PhD student at the Dept. Image et Traitement de l'Information (ITI), Telecom Bretagne Title: Pointwise approach for local texture characterization from very high resolution images
Abstract: This PhD work involves the analysis and characterization of local textural features from VHR images. Classical methods (co-occurrence matrices, local histogram analysis, Markov models, etc.) are mainly based on a dense approach employing local neighborhoods around pixels and implicitly require a stationarity hypothesis which may not hold in VHR imagery. Hence, they are no longer relevant when the areas to be characterized become too small or do not respect the stationarity hypothesis. That is why we propose to exploit a pointwise approach based on characteristic points only, rather than all image pixels, to represent and characterize different types of textures in VHR images. The advantage of such a pointwise approach is that it does not require any stationarity condition and, moreover, is able to deal with large-size image data. The proposed approach has been applied to texture-based classification of VHR images (panchromatic and multispectral). Two strategies have been developed: a pointwise graph-based model and a pointwise covariance-based descriptor. Experimental studies provide very competitive results in terms of texture discrimination as well as algorithmic complexity compared to reference methods. Bio: Minh-Tan Pham received the Engineer degree and the Research Master degree in electronics and telecommunications from the Institut Mines-Telecom, TELECOM Bretagne, Brest, France, in 2013, where he is currently working toward the Ph.D. degree at the Image and Information Processing department (ITI). His research interests include signal and image processing applied to remote sensing imagery, especially texture characterization, classification and change detection using very high resolution optical and Synthetic Aperture Radar (SAR) data. His work also involves the analysis and processing of signals and images on graphs. Location: Byron Beige
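The covariance-descriptor half of the pointwise idea can be sketched minimally (the feature vectors and sizes below are invented for illustration): statistics are gathered only at characteristic points, and two textures are compared through the covariance of their local feature vectors rather than through dense per-pixel neighborhoods.

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor from feature vectors sampled at characteristic
    points only (rows = keypoints, columns = local features)."""
    return np.cov(features, rowvar=False)

def descriptor_distance(A, B):
    """Foerstner-style metric between covariance matrices: root sum of squared
    logs of the generalized eigenvalues."""
    lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
    return float(np.sqrt((np.log(lam) ** 2).sum()))

rng = np.random.default_rng(0)
tex1 = rng.normal(size=(200, 3))        # stand-in features at 200 keypoints
tex2 = 2.0 * rng.normal(size=(200, 3))  # a texture with different statistics
C1, C2 = covariance_descriptor(tex1), covariance_descriptor(tex2)
print(descriptor_distance(C1, C1))      # ~0: identical descriptors
print(descriptor_distance(C1, C2))      # clearly larger
```

Because the descriptor only needs feature vectors at keypoints, no stationarity over a dense neighborhood is assumed, which is the point made in the abstract.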
July 27, 2015 at 14h30
Invited speaker : Prof. Hassan Foroosh, Dept. of EECS, University of Central Florida (USA) Title: Cross-Media Thematic Relation Mining and Applications
Abstract: Multimedia datasets are growing rapidly with the expansion of the World Wide Web. Systems involving collaborative knowledge mining in heterogeneous data types are gaining ever growing importance. Image and text are two of the most popular data types. Many applications involving search through different data types require processing of both textual and visual data in reference to each other. In this talk, I will present some of our recent and ongoing work on various aspects of this problem, including automatic image annotation, caption generation, and thematic relationship mining across visual and textual data. We propose a generative model for image annotation that incorporates the estimated context information along with the visual content of images. We also show that, using group sparse learning, one can establish meaningful semantic relationships between visual features and textual data. We used news data to evaluate our methods: a dataset of over 20,000 images and their associated texts (articles, captions, keywords, categories, etc.). For annotation and caption generation, we use the original image captions as the ground truth and successfully recover accurate captions and textual descriptions that include contextual information. For thematic relation mining between visual and textual data, our method is unsupervised and does not require pre-specification of a rule-based grammar, or external databases at any stage. These methods define a unified framework for representing and relating information across multiple heterogeneous data types in terms of probability distributions, and in particular attempt to unify the two worlds of image processing and natural language processing. Our experiments show considerable improvement over the state of the art, and promise to impact applications such as visualization, retrieval, and information summarization.
Bio: Hassan Foroosh is a Professor of computer science in the Department of Electrical Engineering and Computer Science at the University of Central Florida (UCF). He has authored and co-authored over 120 peer-reviewed journal and conference papers, and has served on the organizing and technical committees of many international conferences. Dr. Foroosh is a senior member of the IEEE and an Associate Editor of the IEEE Transactions on Image Processing (since 2011), a role he also held in 2003-2008. In 2004, he was a recipient of the Piero Zamperoni award from the International Association for Pattern Recognition (IAPR). He also received the Best Scientific Paper Award at the IAPR International Conference on Pattern Recognition in 2008. His research is currently sponsored by NASA, NSF, DIA, the Navy, and industry. Location: Euler Bleu
June 22, 2015 at 14h30
Invited speaker : Ali Madooei, PhD student at the School of Computing Science at Simon Fraser University (Canada) Title: Color for computer-aided dermoscopy image analysis: from low-level features to high-level semantics
Abstract: Color assessment is essential in the clinical diagnosis of skin cancers. Lesions with dark, bluish or variegated colors are deemed to be more likely to be malignant. Indeed, the crucial role of color cues is evident as most clinical diagnosis guidelines (such as the “ABCD rule” and the “7-point checklist”) include color for lesion scoring.
Due to this diagnostic importance, many studies have either focused on or employed color features as a constituent part of their skin lesion analysis systems. These studies range from employing low-level color features, such as simple statistical measures of colors occurring in the lesion, to availing themselves of high-level semantic features such as presence of blue-white veil, globules or color variegation in the lesion.
This presentation will provide an exposition of our recent contributions in this research direction. In particular, we describe two novel approaches for utilizing color both as a low-level and as a high-level image feature.
The first contribution describes a technique that employs the stochastic Latent Topic Models framework to allow quantification of melanin and hemoglobin content in dermoscopy images. Such information bears useful implications for analysis of skin hyper-pigmentation, and for classification of skin diseases.
The second contribution is a novel approach to identify one of the most significant dermoscopic criteria in the diagnosis of Cutaneous Melanoma: the blue-whitish structure. We achieve this goal in a Multiple Instance Learning framework with only image-level labels of whether the feature is present or not. As output, we predict the image label and also localize the feature in the image.
Experiments are conducted on a challenging data set, with results outperforming the state of the art. This study broadens the scope of modelling for computerized image analysis of skin lesions, in particular by putting forward a framework for identifying local dermoscopic features from weakly-labelled data.
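The Multiple Instance Learning decision rule used with image-level labels can be illustrated by the standard max-pooling formulation (a generic sketch, not the authors' exact model; the scores are hypothetical detector outputs):

```python
import numpy as np

def mil_predict(instance_scores):
    """Max-pooling MIL rule: the image (bag) is positive if its best-scoring
    region (instance) is positive; that region localizes the feature."""
    k = int(np.argmax(instance_scores))
    return bool(instance_scores[k] > 0), k

scores = np.array([-1.2, -0.3, 0.8, -0.5])   # hypothetical per-region scores
print(mil_predict(scores))                   # (True, 2): positive image, feature localized in region 2
```

This captures the two outputs mentioned above: the image-level label and, as a by-product, a localization of the dermoscopic feature.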
Bio: Ali Madooei received the BSc degree in Computing Science and Artificial Intelligence with high distinction from Staffordshire University (U.K.) in 2010. He is currently working towards the PhD degree at the School of Computing Science at Simon Fraser University (Canada).
His research interests span computer vision, especially its application to medical image analysis. He has been working particularly on incorporating computer vision into dermatology practice. His thesis focuses on early detection of Cutaneous Melanoma with an emphasis on incorporating color features.
Location: Lagrange Gris
April 27, 2015 at 14h30
Invited speaker : Jorge Prendes, joint PhD student (Supélec/IRIT) at TéSA, Toulouse, France Title: Analysis of remote sensing multi-sensor heterogeneous images
Abstract: Remote sensing images are images of the Earth acquired from planes or satellites. In recent years the technology enabling this kind of imagery has evolved very quickly. Many different sensors have been developed to measure different properties of the Earth's surface, including optical, SAR and hyperspectral images. One application of interest for these images is the detection of changes in multitemporal sets of images. Change detection has been thoroughly studied in the case where the multitemporal dataset consists of images acquired by the same sensor. However, nowadays it is very common to have to deal with datasets containing images acquired by different sensors.
To deal with this kind of dataset, we proposed a statistical model, more precisely a mixture model, to describe the joint distribution of the pixel intensities of the images. On unchanged areas, we expect the parameter vector of the model to belong to a manifold related to the physical properties of the objects present in the image, while on areas presenting changes this constraint is relaxed. The distance of the model parameters to the manifold can thus be used as a similarity measure, and the manifold can be learned using ground-truth images where no changes are present. The model parameters are estimated through a collapsed Gibbs sampler using a Bayesian nonparametric approach combined with a Markov random field.
In this talk I will present the proposed statistical model, its parameter estimation, and the manifold learning approach. The results obtained with this change detection approach will be compared with those of other classical similarity measures.
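The similarity measure described above, the distance of the estimated parameters to a manifold learned from no-change data, can be sketched with a nearest-sample approximation (a toy 1-D manifold; the actual model uses mixture parameters estimated by a Gibbs sampler):

```python
import numpy as np

def manifold_distance(theta, manifold_samples):
    """Distance from an estimated parameter vector to the no-change manifold,
    approximated by the nearest training sample."""
    return float(np.min(np.linalg.norm(manifold_samples - theta, axis=1)))

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 1.0, size=(500, 1))
manifold = np.hstack([t, t ** 2])        # toy manifold of "unchanged" parameter vectors
on_manifold = np.array([0.5, 0.25])      # respects the physical constraint
off_manifold = np.array([0.5, 0.9])      # violates it: would be flagged as a change
print(manifold_distance(on_manifold, manifold) < manifold_distance(off_manifold, manifold))  # True
```

Thresholding this distance turns the similarity measure into a change/no-change decision at each pixel.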
Bio: Jorge Prendes was born in Santa Fe, Argentina, in 1987. He received the five-year Engineering degree in Electronics Engineering with honours from the Buenos Aires Institute of Technology (ITBA), Buenos Aires, Argentina, in July 2010. He worked on signal processing at ITBA within the Applied Digital Electronics Group (GEDA) from July 2010 to September 2012.
Currently he is a Ph.D. student in Signal Processing at the École supérieure d'électricité (Supélec), within the cooperative laboratory TéSA and the Signal and Communication Group of the Institut de Recherche en Informatique de Toulouse (IRIT). His main research interests include image processing, applied mathematics and pattern recognition.
Location: Lagrange Gris
March 16, 2015 at 14h30
Invited speaker : Ganchi Zhang, University of Cambridge, UK Title: Tree-structured Bayesian group-sparse modelling with wavelets
Abstract: We present a recent wavelet-based image restoration framework based on a group-sparse Gaussian scale mixture model. A hierarchical Bayesian estimation is derived using a combination of variational Bayesian inference and a subband-adaptive majorization-minimization method that simplifies computation of the posterior distribution. We show that both of these iterative methods can converge simultaneously and thus find good solutions in the non-convex search space. We also integrate our method, VBMM, with Markov-tree Bayesian modeling of wavelet coefficients. Based on a group sparse GSM model with 2-layer cascaded Gamma distributions for the variances, the proposed method effectively exploits both intrascale and interscale relationships across wavelet subbands. The experimental results demonstrate that the proposed method and its tree-structured extensions are effective for various imaging applications such as image deconvolution, image superresolution and compressive sensing MRI reconstruction, and that they outperform more conventional sparsity-inducing methods based on the l1-norm. Bio: Ganchi Zhang received the B.Eng. degree in electrical engineering from the University of Strathclyde, Glasgow, U.K. and the M.Phil. degree in industrial engineering from the University of Cambridge, Cambridge, U.K., in 2011 and 2012, respectively. He is currently working towards the Ph.D. degree in the Signal Processing and Communications Laboratory, Department of Engineering, University of Cambridge, Cambridge, U.K.. His research interests include image enhancement, wavelet-based techniques, compressive sensing and Bayesian inference. Location: Coriolis
February 13, 2015 at 14h30
Invited speaker : Dr. Marc Antonini, French National Center for Scientific Research (CNRS) and I3S, France Title: Coding and visualization of surface meshes
Abstract: Nowadays, the spectacular development of 3D data acquisition techniques allows the acquisition of very high resolution surface meshes, leading to objects with hundreds of millions of polygons. The problem arises when it comes to visualizing, storing, or transmitting these data over networks with limited bandwidth. Indeed, 3D data, easily exceeding several gigabytes, are difficult to handle on current workstations, and even more so on smartphones and tablets.
In this context, we propose in this presentation a joint compression/visualization solution for dense surface meshes. The proposed solution is based on a multi-resolution wavelet analysis on the surface and on lattice vector quantization. Thanks to a GPU implementation, this approach makes it possible to display high resolution 3D objects (several million polygons) very quickly while optimizing the quality of the displayed object. This work will be illustrated with data generated by the company Cintoo 3D, a spin-off of the University of Nice-Sophia Antipolis and CNRS.
Bio: Marc Antonini received the PhD degree in electrical engineering in 1991 and the “Habilitation à Diriger des Recherches” in 2003, both from the University of Nice-Sophia Antipolis (France). He was a postdoctoral fellow at the Centre National d’Etudes Spatiales (CNES, Toulouse, France) in 1991 and 1992. In 1993 he joined the CNRS at the I3S laboratory (UMR 7271 of the University of Nice-Sophia Antipolis and CNRS), where he leads the MediaCoding research group (www.i3s.unice.fr/mediacoding). He has been a “Directeur de Recherche CNRS” since 2004 and is the co-author of more than 200 publications, 8 book chapters and more than 10 patents. He has extensive experience in image and video coding using the wavelet transform. His current research interests include image coding, video coding, geometry processing, surface mesh coding and animation coding, and digital holography coding. He is also interested in the analysis of the information contained in the neural code of the visual system, with applications in bio-inspired image and video compression. He has advised 22 PhD students in the field and is currently advising 5 more. In 2013 he founded Cintoo3D, a start-up providing solutions for 3D data streaming and visualization (www.cintoo3d.com). Location: Coriolis
December 15, 2014 at 14h30
Invited speaker : Brahim Boussidi, Institut Mines Télécom/ Télécom Bretagne, France Title: Inter-scale texture modeling and synthesis using conditional Gaussian fields: application to super-resolution of ocean remote sensing data
Abstract: We are interested here in inter-scale texture modeling and simulation. We consider conditional Gaussian models obtained as solutions of stochastic partial differential equations. Such models allow us to synthesize non-stationary textures while controlling the spectral, geometric and statistical properties of the simulated fields. These models are parameterized by their covariance kernel, which can account for local anisotropy and regularity properties. Discretised versions of these models can be regarded as 2D auto-regressive models, obtained using the spectral representation of the differential operator. These various operators are analytically stated in the case of Matérn fields.
We discuss and evaluate the relevance of such non-stationary parametric models compared to non-parametric Gaussian fields. As a real case study, we consider an application to the texture-based super-resolution of geophysical fields at the ocean surface from multi-modal satellite observations.
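Sampling a stationary Gaussian field by filtering white noise with the square root of a Matérn-type power spectrum gives a feel for the models discussed above (a minimal sketch; the talk's models are non-stationary and SPDE-based, and the spectrum parameters here are arbitrary):

```python
import numpy as np

def matern_like_field(n, rho=0.1, nu=1.0, seed=0):
    """Spectral simulation of a stationary Gaussian field: filter white noise by
    the square root of a Matérn-type power spectrum (1 + rho^2 |k|^2)^-(nu+1)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * n                       # integer frequencies
    kx, ky = np.meshgrid(k, k)
    spectrum = (1.0 + rho ** 2 * (kx ** 2 + ky ** 2)) ** (-(nu + 1.0))
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
    return field / field.std()                      # normalize to unit variance

f = matern_like_field(64)
print(f.shape)   # (64, 64)
```

Here rho controls the correlation length and nu the regularity of the field; making them vary spatially is one route to the non-stationary textures mentioned in the abstract.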
November 17, 2014 at 14h30
Invited speaker : Prof. Ercan E. Kuruoglu, ISTI (Institute of Information Science and Technology “A. Faedo”)-CNR, Pisa, Italy & Max Planck Institute for Molecular Genetics, Berlin, Germany Title: Non-Normal/Dynamic Bayesian Networks
Abstract: Gaussian Bayesian networks have gained popularity in diverse applications ranging from image processing to gene expression modelling. They provide a multidimensional probability model for multivariate data, whose parameters are generally estimated through numerical Bayesian methods such as MCMC. It is a common experience among practitioners in applied fields, however, that many real data are skewed and impulsive, making Gaussian models an uncomfortable fit. We demonstrate that Gaussian networks can be extended to a more general class of Bayesian models, namely alpha-stable graphical models. We demonstrate the success of the model on some standard Bayesian test networks and on gene expression data. Another limitation of current applications of Bayesian networks is the assumption of stationary networks, with fixed structure and parameter values. However, many real-life data change over time or space. We provide a new methodology based on sequential Monte Carlo (particle filtering) that tracks the changes in a Bayesian network. We present results on a computer vision application and a gene expression modelling problem. An important objective of this seminar is to initiate discussions on how these methods can be applied in other areas such as image processing. Bio: Ercan E. Kuruoglu was born in Ankara, Turkey, in 1969. He obtained his BSc and MSc degrees, both in Electrical and Electronics Engineering, at Bilkent University in 1991 and 1993, and the MPhil and PhD degrees in Information Engineering at the University of Cambridge, in the Signal Processing Laboratory, in 1995 and 1998, respectively. Upon graduation from Cambridge, he joined the Xerox Research Center in Cambridge as a permanent member of the Collaborative Multimedia Systems Group. In 2000, he was at INRIA-Sophia Antipolis as an ERCIM fellow. In 2002, he joined ISTI-CNR, Pisa, as a permanent member, where he has been an Associate Professor and Senior Researcher since 2006.
He was a visiting professor in the Georgia Institute of Technology graduate program in Shanghai in 2007 and 2011. He was a 111 Project Foreign Expert of the Chinese Government in 2007-2011 and regularly visited Shanghai Jiao Tong University. He was a recipient of the Alexander von Humboldt Foundation Fellowship (2012-2014) and spent his sabbatical at the Max Planck Institute for Molecular Genetics, Berlin. He was an Associate Editor for the IEEE Transactions on Signal Processing in 2002-2006 and for the IEEE Transactions on Image Processing in 2005-2009. He is currently the Editor-in-Chief of Digital Signal Processing. He was the Technical co-Chair for EUSIPCO 2006 and the tutorials co-chair of ICASSP 2014. He served as an elected member of the IEEE Technical Committee on Signal Processing Theory and Methods (2004-2010), was a member of the IEEE Ethics Committee, and is a Senior Member of the IEEE. He is the author of more than 100 peer-reviewed publications and holds 5 US, European and Japanese patents. His research interests are in statistical signal processing and information and coding theory, with applications in image processing, astronomy, geophysics, telecommunications, computational biology and chemistry. Location: Coriolis
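The symmetric alpha-stable variables underlying the graphical models in the talk above can be sampled with the classical Chambers-Mallows-Stuck construction (a generic sketch, independent of the speaker's work):

```python
import numpy as np

def sample_sas(alpha, size, seed=0):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables.
    alpha=2 recovers a Gaussian (variance 2); alpha<2 gives heavy tails."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit-mean exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))

x = sample_sas(1.5, 100_000)
print(x.shape)   # (100000,)
```

For alpha=1 the second factor reduces to 1 and the sampler returns tan(U), i.e. a Cauchy variable; the impulsive, skew-tolerant behavior motivating the move away from Gaussian networks comes precisely from these alpha<2 cases.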
October 23, 2014 at 14h30
Invited speaker : Prof. B. S. Manjunath, University of California, Santa Barbara, USA Title: Scalable Scientific Image Informatics
Abstract: Recent advances in microscopy imaging, image processing and computing technologies enable large-scale scientific experiments that not only generate large collections of images and video, but also pose new computing and information processing challenges. These include providing ubiquitous access to images, videos and metadata resources; creating easily accessible image and video analysis, visualizations and workflows; and publishing both data and analysis resources. Further, contextual metadata, such as experimental conditions in biology, are critical for quantitative analysis. Streamlining collaborative efforts across distributed research teams with online virtual environments will improve scientific productivity, enhance understanding of complex phenomena and allow a growing number of researchers to quantify conditions based on image evidence that have so far remained subjective. This talk will focus on recent work in my group on image segmentation and quantification, followed by a detailed description of the BisQue platform. BisQue (Bio-Image Semantic Query and Environment) is an open-source platform integrating image collections, metadata, analysis, visualization and database methods for querying and search. We have developed new techniques for managing user-defined data models for biological datasets, including experimental protocols, images, and analysis. BisQue is currently used in many laboratories around the world and is integrated into the iPlant cyber-infrastructure (see http://www.iplantcollaborative.org), which serves the plant biology community. For more information on BisQue see http://www.bioimage.ucsb.edu. Bio: B. S. Manjunath received the B.E. degree (with distinction) in electronics from Bangalore University, Bangalore, India, in 1985, the M.E. degree (with distinction) in systems science and automation from the Indian Institute of Science, Bangalore, in 1987, and the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, in 1991. He is a Professor of electrical and computer engineering, Director of the Center for Bio-Image Informatics, and Director of the newly established Center on Multimodal Big Data Science and Healthcare at the University of California, Santa Barbara. His current research interests include image processing, distributed processing in camera networks, data hiding, multimedia databases, and bio-image informatics. He has published over 250 peer-reviewed articles on these topics and is a co-editor of the book Introduction to MPEG-7 (Wiley, 2002). He was an associate editor of the IEEE Transactions on Image Processing, Pattern Analysis and Machine Intelligence, Multimedia, and Information Forensics, and of the IEEE Signal Processing Letters, and is currently an associate editor for the BMC Bioinformatics journal. He is a co-author of the paper that won the 2013 Transactions on Multimedia best paper award and is a fellow of the IEEE. Location: Coriolis
September 15, 2014 at 14h30
Invited speaker : Samir Sahli, Ph.D.,
McMaster University, Canada
Title: Automatic detection of vehicles in large-scale aerial images
Abstract: Over the years, the detection of vehicles in large-scale aerial imagery has received great attention, especially with the recent development of Unmanned Aerial Vehicles (UAVs) for military and civilian applications.
The automatic detection of vehicles remains a difficult task for several reasons. First, vehicles can interfere heavily with their immediate environment, producing occlusions and shadow areas; the visual aspect of a vehicle's main body parts (e.g. rooftop, hood, and windshield) can therefore be drastically altered. In the context of uncontrolled environments and sub-optimal acquisition conditions, no prior information is known about the scene, the sensor used, the time of day, or the viewpoints from which the images were taken. Finally, vehicles are usually small objects in the context of aerial imagery, dispersed across the scene, which makes them more difficult to detect. All these constraints motivate the development of robust, automatic methods able to detect vehicles in aerial images.
To tackle these problems, we proposed a set of algorithms combining different tools such as image feature extraction, Support Vector Machines for classification, and an unsupervised clustering algorithm (Affinity Propagation under a spatial constraint) within a feature-level fusion scheme. Over-segmentation techniques and a salient region detection algorithm have been explored as strategies to screen the image content.
The proposed algorithms have been tested on image databases provided by the Defence Research and Development Canada (DRDC) laboratories and have proven their efficiency and robustness in detecting vehicles in urban and non-urban environments.
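The pipeline described above, classifier scores followed by spatially constrained grouping, can be caricatured as follows (the classifier weights, features, and merge rule are placeholders; the talk uses trained SVMs and Affinity Propagation, not this greedy grouping):

```python
import numpy as np

def detect(windows, feats, w, b, merge_dist=20.0):
    """Score candidate windows with a linear classifier (w, b stand in for a
    trained SVM) and greedily merge nearby positives, a crude stand-in for
    spatially constrained clustering."""
    scores = feats @ w + b
    positives = windows[scores > 0]
    groups = []                      # each group: list of nearby window centers
    for p in positives:
        for g in groups:
            if np.linalg.norm(p - np.mean(g, axis=0)) < merge_dist:
                g.append(p)
                break
        else:
            groups.append([p])
    return np.array([np.mean(g, axis=0) for g in groups])

windows = np.array([[10.0, 10.0], [12.0, 11.0], [100.0, 50.0], [300.0, 300.0]])
feats = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [-1.0, 0.0]])
w, b = np.array([1.0, 1.0]), -0.5
print(detect(windows, feats, w, b))   # two detections: the nearby pair is merged
```

Grouping positives spatially is what turns many overlapping window responses into one detection per vehicle, the role Affinity Propagation plays in the actual system.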
Bio: Samir Sahli was awarded the B.Sc. degree in Applied Mathematics and Information Sciences from the University of Nice Sophia-Antipolis in 2004. He received the M.Sc. and Ph.D. degrees in Physics (specialty Optics/Photonics/Image Science) from Université Laval, Quebec, Canada, in 2008 and 2013, respectively. During his graduate studies, Dr. Sahli worked with Defence Research and Development Canada (DRDC) on the automatic detection and recognition of targets in aerial imagery, especially in the context of uncontrolled environments and sub-optimal acquisition conditions. Since 2009 he has acted as a consultant for several companies based in Europe and North America specializing in Intelligence, Surveillance and Reconnaissance (ISR) and in Remote Sensing.
Dr. Sahli joined the Biophotonics laboratory at McMaster University in 2013 as a Postdoctoral Fellow. His current research interests are in optics, image processing and machine learning. He is involved in several projects, such as the development of a novel generation of gastrointestinal tract imaging devices; hyperspectral imaging of skin erythema for individualized radiotherapy treatment; and automatic detection of precancerous Barrett's esophagus cells using fluorescence lifetime imaging microscopy, multiphoton microscopy and machine learning.
July 23, 2014 at 14h30
Invited speaker : Prof. Zoltan Kato,
Szeged University, Hungary
Title: Geometric priors for Markov Random Fields
Abstract: Object extraction remains one of the key problems of computer vision; it can be stated as finding the regions of the image domain occupied by a specified object or objects. The solution often requires high-level knowledge about the shape of the objects. Higher-order active contour (HOAC) models integrate shape knowledge via the inclusion of explicit long-range dependencies between region boundary points. A subsequent reformulation of HOAC models as phase fields can be interpreted as real-valued continuum Markov random fields. Discretizing the phase field 'gas of circles' (GOC) model, we will develop an equivalent GOC Markov random field model, constructed in a principled way, that assigns high probability to regions in the image domain consisting of an unknown number of circles of a given radius.
A major limitation of the 'gas of circles' model is that touching or overlapping objects cannot be represented. A generalization of the original GOC model that overcomes these limitations while maintaining computational efficiency is the multi-layer phase field GOC model. The Markovian formulation yields a multi-layer binary Markov random field model that assigns high probability to object configurations in the image domain consisting of an unknown number of possibly touching or overlapping near-circular objects of approximately a given size. Each layer has an associated binary random field that specifies a region corresponding to objects. Overlapping objects are represented by regions in different layers. Within each layer, long-range clique potentials favor connected components of approximately circular shape, while regions in different layers that overlap are penalized through inter-layer cliques.
The proposed GOC MRF models can be used as a prior for object extraction whenever the objects conform to the ‘gas of circles’ geometry, e.g. tree crowns in aerial images or cells in biological images.
[This work has been partially supported by the European Union and the State of Hungary, co-financed by the European Social Fund through project TAMOP–4.2.4.A/2-11-1-2012–0001 National Excellence Program.]
Bio: Zoltan Kato received the BS and MS degrees in computer science from the Jozsef Attila University, Szeged, Hungary, in 1988 and 1990, and the PhD degree from the University of Nice, France, in 1994, conducting his research at INRIA Sophia Antipolis. Since then, he has been a visiting research associate at the Computer Science Department of the Hong Kong University of Science & Technology; an ERCIM postdoctoral fellow at CWI, Amsterdam; and a visiting fellow at the School of Computing, National University of Singapore. In 2002, he joined the Institute of Informatics, University of Szeged, Hungary, where he heads the Department of Image Processing and Computer Graphics. His research interests include image segmentation, registration, shape matching, statistical image models, Markov random fields, color, texture, motion, shape modeling, and variational and level set methods. He has served on the program committees of major conferences (e.g. as Area Chair for ICIP 2008 and 2009) and has been an Associate Editor for the IEEE Transactions on Image Processing.
He is the President of the Hungarian Association for Image Processing and Pattern Recognition (KEPAF) and a Senior Member of IEEE.
June 30, 2014 at 14h30
Invited speaker : Dr. Michal Haindl,
Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic
Title: Unsupervised texture segmentation
Abstract: The visual appearance of natural materials depends significantly on acquisition circumstances, particularly illumination conditions and viewpoint position, whose variations cause difficulties in the analysis of real scenes.
We address this issue in the framework of unsupervised segmentation of static and dynamic textures. Textural features, based on fast estimates of Markovian statistics, that are simultaneously rotation and illumination invariant will be discussed. The proposed features are invariant to in-plane material rotation and illumination spectrum (colour invariance), and they are robust to local intensity changes (cast shadows) and to changes in illumination direction. No knowledge of illumination conditions is required, and recognition is possible from a single training image per material.
Material recognition is tested on the currently most realistic visual representation, the Bidirectional Texture Function (BTF), using the CUReT and ALOT texture datasets with more than 250 natural materials. Our proposed features significantly outperform leading alternatives, including Local Binary Patterns (LBP, LBP-HF) and the texton MR8 method.
Finally, we will discuss the performance verification of texture segmenters based on the Prague texture segmentation data-generator and benchmark, and its recent modifications.
Bio: To be provided. Location: Coriolis
May 26, 2014 at 14h30
Invited speaker : Guillaume Tartavel,
LTCI lab of Telecom-ParisTech
Title: Variational Texture Synthesis: Combining Patch Sparsity and Fourier Spectrum
Abstract: In this talk, I will present a variational texture framework based on patch sparsity and Fourier spectrum constraints. The texture synthesis problem consists in generating several images with the same visual appearance as an input (the exemplar) without being strictly identical to it. We propose a variational approach: we define a cost function measuring the similarity of certain statistics of the exemplar texture to those of any other image. This function takes into account the color distribution, the spectrum distribution, and an approximation error from a sparse patch decomposition over a learned dictionary. A texture with statistics similar to the exemplar has a low cost, whereas one with very different statistics has a large cost. The goal is to find an image with a low cost: the synthesis problem is thus reduced to an optimization problem, for which we propose an algorithm. Bio: I am a 3rd-year Ph.D. student in the LTCI lab of Telecom-ParisTech with Yann Gousseau (LTCI) and Gabriel Peyré (Ceremade, Université Paris-Dauphine). My subject is texture modeling: I worked on texture synthesis during the first half of my Ph.D. and I am currently working on texture-preserving denoising methods. I received the M.Sc. degree from ENS Cachan, majoring in image processing, and the Engineer diploma from Telecom ParisTech, specializing in image processing and computer science.
I am now looking towards post-doctoral research, either in academia or as an R&D researcher.
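The statistics-matching cost described in the abstract can be sketched in a simplified form. This is a hypothetical illustration assuming grayscale images; it keeps only a colour-distribution term (1-D optimal transport between sorted pixel values) and a Fourier-spectrum term, and omits the sparse patch-coding term:

```python
import numpy as np

def texture_cost(candidate, exemplar, w_color=1.0, w_spec=1.0):
    """Toy variational cost: distance between pixel-value distributions
    (1-D optimal transport via sorted values) plus distance between
    Fourier magnitude spectra. Low cost = similar texture statistics."""
    c = np.asarray(candidate, dtype=float)
    e = np.asarray(exemplar, dtype=float)
    color_term = np.mean((np.sort(c.ravel()) - np.sort(e.ravel())) ** 2)
    spec_term = np.mean((np.abs(np.fft.fft2(c)) - np.abs(np.fft.fft2(e))) ** 2)
    return w_color * color_term + w_spec * spec_term

rng = np.random.default_rng(0)
exemplar = rng.random((32, 32))
shifted = np.roll(exemplar, 5, axis=1)   # same statistics, different image
noise = rng.random((32, 32)) * 2.0       # different statistics
print(texture_cost(shifted, exemplar), texture_cost(noise, exemplar))
```

A circularly shifted copy of the exemplar has identical pixel-value and spectrum statistics, so it gets near-zero cost despite being a different image, which is exactly the "same appearance, not identical" behaviour the abstract asks for.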
March 3, 2014 at 14h30
Invited speaker : Dr. Rémi Flamary,
Assistant Professor (Maître de Conférences) at the Université de Nice, Lagrange Laboratory, France
Title: Learning with infinitely many features
Abstract: We propose a principled framework for learning with infinitely many features, a situation usually induced by continuously parametrized feature extraction methods. Such cases occur, for instance, when considering Gabor-based features in computer vision problems or when dealing with Fourier features for kernel approximations.
We cast the problem as the one of finding a finite subset of features that minimizes a regularized empirical risk. After having analyzed the optimality conditions of such a problem, we propose a simple algorithm which has the flavour of a column-generation technique. We also show that using Fourier-based features, it is possible to perform approximate infinite kernel learning.
Our experimental results on several datasets show the benefits of the proposed approach in several situations including texture classification and large-scale kernelized problems (involving about 100 thousand examples).
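The column-generation flavour of such an algorithm can be sketched with a toy matching-pursuit-style loop over random Fourier features. This is an illustrative approximation under simplifying assumptions, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression target; the "infinite" feature family is the continuously
# parametrized set phi_{w,b}(x) = cos(w*x + b) (Fourier features).
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(3 * x) + 0.5 * np.cos(7 * x)

def greedy_fourier_fit(x, y, n_features=10, n_candidates=500):
    """Column-generation flavour: at each step, draw candidate features,
    add the one most correlated with the current residual, then refit
    all selected features jointly by least squares."""
    selected = []
    residual = y.copy()
    for _ in range(n_features):
        ws = rng.uniform(0, 10, n_candidates)
        bs = rng.uniform(0, 2 * np.pi, n_candidates)
        cands = np.cos(np.outer(ws, x) + bs[:, None])   # (n_candidates, n)
        k = int(np.argmax(np.abs(cands @ residual)))    # best new "column"
        selected.append((ws[k], bs[k]))
        design = np.stack([np.cos(w * x + b) for w, b in selected], axis=1)
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        residual = y - design @ coef
    return selected, coef, residual

selected, coef, residual = greedy_fourier_fit(x, y)
print(np.linalg.norm(residual), np.linalg.norm(y))
```

Each round only ever instantiates a finite candidate pool from the infinite family, which is the key practical idea behind column-generation-style approaches.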
Bio: Rémi Flamary has been an Assistant Professor at the Université de Nice Sophia Antipolis and a member of the Lagrange Laboratory/Observatoire de la Côte d’Azur since 2012. He received a Dipl.-Ing. in electrical engineering and an M.S. degree in image processing from the Institut National des Sciences Appliquées de Lyon in 2008, and a Ph.D. degree from the University of Rouen in 2011. His current research interests involve signal processing, machine learning and image processing. Location: Coriolis
February 10, 2014 at 14h30
Invited speaker : Dr. Pierre Couteron,
IRD – UMR AMAP, France
Title: Texture Analysis of Very High Spatial Resolution Satellite Images as a Way to Monitor Vegetation and Forest Biomass in the Tropics
Abstract: Space observation is acknowledged as quintessential for providing vegetation monitoring strategies at multiple scales over extensive territories of low population and limited accessibility. Optical satellite imagery represents the major data source and covers an ample continuum of image resolution and swath. Yet vegetation monitoring in both the dry and wet tropics has long been hampered by insufficient pixel resolution, which renders inefficient the well-mastered, pixelwise classification techniques. The increasing availability of images of very high spatial resolution (VHSR, pixels of less than 1 m) has opened new prospects by allowing the inference of vegetation properties from image texture features (i.e., local inter-pixel variability). In this presentation I aim to illustrate this potential through recently published case studies of vegetation monitoring, with special emphasis on above-ground biomass assessment in tropical forests. To quantify textural features of the forest canopy, we use the FOTO method (Fourier-based textural ordination) applied to panchromatic VHSR satellite images. These features can generally be related to meaningful vegetation characteristics, notably to tree crown size distribution and sometimes to inter-crown gaps. They are also good predictors of above-ground forest biomass, thanks to allometric equations that relate the size and biomass of the different parts of a tree. The modelling of 3D forest structure, which allows generating virtual canopy images (using the DART radiative transfer model developed at Cesbio in Toulouse), is a central tool to ascertain the validity of biomass prediction in various contexts and to explore the sources of error due to image acquisition conditions (scene geometry) and tree shape variability.
Bio: Pierre Couteron is Directeur de Recherche at the Institut de Recherche pour le Développement (IRD) and has extensive experience of tropical vegetation in various contexts (African Sahel, Congo Basin, India). He is directly involved in research at the interface between vegetation studies, modeling and spatial observation. He is Head of the AMAP lab in Montpellier, which is devoted to the study and modeling of plants and vegetation. Location: Coriolis
January 27, 2014 at 14h30
Invited speaker : Dr. Yohann Tendero,
Assistant Adjunct Professor of Mathematics at UCLA, USA
Title: On the Foundations of Computational Photography: Theory and Practice
Abstract: Until recently, moving objects could only be photographed with short exposure times to avoid motion blur. Recently, however, two groundbreaking works in computational photography have offered new camera designs allowing arbitrary exposure times. The “flutter shutter” of Agrawal et al. creates an invertible motion blur by using a clever shutter technique that interrupts the photon flux during the exposure time according to a well-chosen binary sequence. The “motion-invariant photography” of Levin et al. achieves the same result by a uniformly accelerated camera motion. This talk proposes a simple mathematical method for evaluating the image quality of these new cameras. The theory, providing explicit formulas for the SNR obtained after deconvolution, raises a central paradox for these cameras: it shows that even an infinite exposure time cannot bring an SNR increase of more than 17%! Nevertheless, three consequences of the theory make it possible to mitigate this harsh limitation. First, this SNR gain can be obtained on any video with moderate motion blur by a very simple new temporal video filter; the improvement from this blind deconvolution is visible. Second, we show that if a probabilistic motion model is available, then one can compute an optimal flutter shutter with an SNR significantly exceeding the predicted limit. Third, we show that the “best snapshot” for a given exposure time is not obtained with a constant aperture: it is obtained with a flutter shutter. Bio: Yohann Tendero obtained his Ph.D. in 2012 from ENS Cachan.
He is an expert on the mathematical aspects of computational photography. In 2013 he received an award from the Hadamard Foundation.
In 2014 he will organize a workshop on computational photography at IPAM (Los Angeles). He is now a post-doctoral scholar at UCLA with S. Osher.
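The invertibility argument behind the flutter shutter above can be illustrated numerically: a constant (box) exposure kernel has exact zeros in its Fourier spectrum, so the resulting blur cannot be stably deconvolved, while a binary shutter code can avoid such zeros. The code below uses a short hand-picked binary sequence for illustration, not the actual sequence of Agrawal et al.:

```python
import numpy as np

def kernel_spectrum_min(kernel):
    """Smallest Fourier magnitude of a blur kernel; a (near-)zero value
    means the blur cannot be stably inverted by deconvolution."""
    return float(np.min(np.abs(np.fft.fft(kernel))))

exposure = 8
box = np.ones(exposure) / exposure               # classical open shutter
code = np.array([1, 1, 0, 1, 0, 0, 0, 0]) / 3.0  # toy flutter-shutter code

print(kernel_spectrum_min(box), kernel_spectrum_min(code))
```

The box kernel's spectrum vanishes at several frequencies (information at those frequencies is destroyed), whereas the toy code keeps all Fourier magnitudes bounded away from zero, which is what makes the coded blur invertible.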
January 20, 2014 at 14h30
Invited speaker : Dr. Csaba Benedek,
Institute for Computer Science and Control of the Hungarian Academy of Sciences, Hungary
Title: An Embedded Marked Point Process Framework for Multilevel Object Population Analysis
Abstract: In this talk, I introduce a probabilistic approach for extracting complex hierarchical object structures from various digital images used by machine vision applications. The proposed framework extends conventional Marked Point Process models by (i) admitting object-subobject ensembles in parent-child relationships and (ii) allowing corresponding objects to form coherent object groups, through a Bayesian segmentation of the population. A global optimization process attempts to find the optimal configuration of entities, considering the observed data, prior knowledge, and interactions between neighboring and hierarchically related objects. The proposed method is demonstrated in three different application areas: optical circuit inspection, built-up area analysis in remotely sensed images, and traffic monitoring from airborne Lidar data. Bio: Csaba Benedek is a senior research fellow with the Distributed Events Analysis Research Laboratory at the Institute for Computer Science and Control of the Hungarian Academy of Sciences. Previously, he worked as a postdoctoral researcher with the Ariana Project Team at INRIA Sophia Antipolis, France. He was the national project leader of the Array Passive ISAR adaptive processing project funded by the European Defence Agency. Currently he is the project leader of the i4D Project of MTA SZTAKI and of the DUSIREF project funded by the European Space Agency. His research interests include image segmentation and object extraction, change detection, scene recognition and reconstruction from Lidar point clouds, and remotely sensed data analysis. Location: Coriolis
December 2, 2013 at 14h30
Invited speaker : Prof. Arnaud Doucet,
Department of Statistics, Oxford University, UK
Title: The expected auxiliary variable method for Monte Carlo inference
Abstract: The expected auxiliary variable method is a general framework for Monte Carlo simulation in situations where the target distribution of interest is intractable, thus preventing the implementation of classical methods. The method finds application where marginal computations are of interest, where transdimensional move design is difficult in model selection setups, or when the normalising constant of a particular distribution is unknown but required for exact computations. I will present several examples of applications of this principle, as well as some theoretical results that we have recently obtained in certain scenarios. Bio: Arnaud Doucet received his PhD degree in Statistical Signal Processing from the University Paris XI (Orsay) in 1997. He has previously held faculty positions at the University of Melbourne, the University of Cambridge, the University of British Columbia and the Institute of Statistical Mathematics in Tokyo. Since 2010 he has been Professor in the Department of Statistics of Oxford University.
His research interests include Monte Carlo methods and Bayesian inference.
November 25, 2013 at 14h30
Invited speaker : Dr. Mohamed Naouai,
Assistant Prof. at the University of Tunis El Manar, Tunisia
Title: Localization and reconstruction of the road network by VHR image vectorisation and approximation using “NURBS” constraints
Abstract: The extraction of road networks from aerial or satellite images has been, and still is, the subject of much research, and many methods address this problem. Indeed, it is an important issue, especially since mapping the surface and updating existing maps is hard and time-consuming. Despite this, the extraction of road networks remains a challenge because of the great variability of the objects involved, which are therefore difficult to characterize. In this context, I will present two approaches to locating roads. The first is based on the process of converting the image into a vector form. The originality of this approach lies in the use of a geometric method to ensure the shift to a vector representation of the original image, and in the establishment of a logical formalism based on a set of perceptual criteria. It allows the filtering of unnecessary information and the extraction of linear structures. In the second approach, I will present an algorithm based on wavelet theory, which particularly highlights the use of both multi-resolution and multi-direction systems. We thus introduce a road localization approach that manages the multidirectional frequency data resulting from the Log-Gabor wavelet transform.
In the localization step, I will present two road detectors capable of exploiting radiometric, geometric and frequency data. These data alone, however, do not allow accurate and precise results. To overcome this drawback, a tracking algorithm is needed. I will present the reconstruction of road networks by NURBS curves. This approach is based on a set of landmark points identified in the localization phase and introduces a new concept, denoted NURBSC, which uses the geometric constraints of the shapes to be approximated. The road segments identified are connected in order to obtain a continuous road network.
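The geometric expressiveness of NURBS that such curve-fitting approaches build on can be illustrated with the simplest NURBS segment, a quadratic rational Bézier curve, which reproduces conic arcs exactly. This is a generic NURBS property shown for illustration, not the talk's specific NURBSC algorithm:

```python
import numpy as np

def rational_bezier(ctrl, weights, t):
    """Evaluate a quadratic rational Bezier curve (the simplest NURBS
    segment) at parameter t in [0, 1]."""
    b = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    w = b * weights
    return (w[:, None] * ctrl).sum(axis=0) / w.sum()

# A quarter circle: with these control points and weights, the rational
# Bezier traces the arc exactly, illustrating how NURBS weights encode
# hard geometric (conic) constraints that plain polynomials cannot.
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, np.sqrt(2) / 2, 1.0])
pts = np.array([rational_bezier(ctrl, weights, t) for t in np.linspace(0, 1, 9)])
print(np.linalg.norm(pts, axis=1))
```

Every sampled point lies on the unit circle, so a road arc constrained to a circular geometry can be represented without approximation error.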
Location: Euler Violet
September 16, 2013 at 14h30
Invited speaker : Prof. Caroline Chaux,
Laboratoire d’Analyse, Topologie, Probabilités, Aix-Marseille Université, France
Title: Mixed discrete/continuous optimization approaches for Poisson-Gaussian noise parameter estimation
Abstract: We are interested here in MACROscopy images, where the Poisson-Gaussian model is well suited due to low photon count and high detector noise. In the literature, researchers have considered either the Poisson model (accounting for signal-dependent noise, e.g. photon noise) or the Gaussian model (accounting for signal-independent noise, e.g. electric noise, thermal noise, etc.). However, these models have been shown to be too simple, and thus, more recently, the sum of Poisson and Gaussian models was proposed to better fit real data.
In this work, we propose to estimate the Poisson-Gaussian noise parameters based on two realistic scenarios:
i) one from time series images, taking into account bleaching effects, and ii) another from a single image.
These estimations are grounded in the use of an Expectation-Maximization (EM) approach associated with mixed discrete-continuous optimization strategies (proximal methods, combinatorial optimization techniques).
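To see why the Poisson-Gaussian parameters are identifiable from a time series at all, here is a simpler moment-based sketch (an illustration under idealized assumptions, not the EM approach of the talk): under the model y = gain · Poisson(flux) + Gaussian(0, σ²), the per-pixel variance is an affine function of the per-pixel mean, var = gain · mean + σ², so a line fit recovers both parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_series(flux, gain=2.0, sigma=1.0, n_frames=5000):
    """Stack of frames following the Poisson-Gaussian model:
    y = gain * Poisson(flux) + Gaussian(0, sigma^2)."""
    lam = np.broadcast_to(flux, (n_frames,) + flux.shape)
    return gain * rng.poisson(lam) + rng.normal(0.0, sigma, lam.shape)

def estimate_gain_sigma2(frames):
    """Moment-based estimate: per-pixel variance is affine in the mean,
    var = gain * mean + sigma^2, so fit a line to (mean, var) pairs."""
    means = frames.mean(axis=0).ravel()
    variances = frames.var(axis=0, ddof=1).ravel()
    gain, sigma2 = np.polyfit(means, variances, 1)
    return gain, sigma2

flux = np.linspace(5.0, 50.0, 100)   # 100 pixels with different photon fluxes
frames = simulate_series(flux)
gain_hat, sigma2_hat = estimate_gain_sigma2(frames)
print(gain_hat, sigma2_hat)
```

This sketch ignores the bleaching effects and single-image scenario handled by the talk's EM formulation; it only illustrates the mean-variance relationship that makes the problem well posed.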
Bio: Caroline Chaux (32 years old; Aix-Marseille Université, CNRS, Centrale Marseille, LATP, UMR 7353, 13453 Marseille, France): An engineer in telecommunications from the Institut des Sciences de l’Ingénieur de Toulon et du Var (ISITV), France, she received the DEA degree in Signal and Digital Communications from the Université de Nice Sophia Antipolis, France in 2003, and the PhD degree in signal and image processing from Université Paris-Est (Laboratoire d’Informatique Gaspard Monge, UMR CNRS 8049), France in 2006. In 2006-07, she was a post-doctoral fellow with the ARIANA research group (INRIA Sophia Antipolis Méditerranée) before being appointed the same year by CNRS as a research scientist in the Laboratoire d’Informatique Gaspard Monge (UMR CNRS 8049) of the Université Paris-Est. In 2012, she moved to the Laboratoire d’Analyse, Topologie, Probabilités of Aix-Marseille Université. In 2005, she received the best student paper award at the IEEE ICASSP conference, and in 2008 her thesis received the best PhD thesis award (signal/image section) from Club EEA. Location: Coriolis
July 29, 2013 at 14h30
Invited speaker : Prof. Jon Yngve Hardeberg,
Gjøvik University College, Gjøvik, Norway
Title: Spectral imaging of fine art paintings
Abstract: Hyperspectral imaging of fine art paintings has opened up new possibilities for their analysis, visualisation, conservation, and documentation, and makes it possible to investigate the paintings scientifically more precisely than other existing imaging and measurement techniques. In this talk we first give a review of the use of imaging technology for cultural heritage applications. We then focus on our recent hyperspectral image acquisition and analysis of the painting “The Scream”, painted in 1893 by Edvard Munch. This work has been done in collaboration with researchers from the National Museum of Art, Architecture and Design (Oslo, Norway) and Norsk Elektro Optikk AS (Lørenskog, Norway).
The spectral reflectance has been recorded using the hyperspectral imaging systems HySpex VNIR-1600 and HySpex SWIR-320m-e. The VNIR-1600 camera generates 160 spectral bands in the visible and near infrared (VNIR) region of 400 to 1000 nm with a spatial resolution on the painting of ~0.2 mm (for the whole painting) and 0.06 mm (for a subset). The SWIR-320m-e camera produces 256 bands in the shortwave infrared (SWIR) region of 1000 to 2500 nm with a spatial resolution of 0.29 mm for the whole painting. An accurately controlled translation stage moves the camera and illumination sources to cover the entire painting surface of size 91 x 73.5 cm. The image spectrum was analysed during acquisition, and the camera parameters were optimised for signal-to-noise ratio. Lighting levels were controlled and polarising filters were employed to avoid specular reflections from the painting surface. We simultaneously captured a calibrated grey reflectance reference and a Macbeth ColorChecker as reference data for normalization and conversion to spectral reflectance.
After having explained the setup and calibration we move on to presenting our approaches to visualizing and analysing the data, including extracting hidden information using Independent Component Analysis and identifying the used pigments using Spectral Correlation Mapper. Conclusions are drawn and directions for further research are discussed.
Bio: Jon Y. Hardeberg received his sivilingeniør (MSc) degree in signal processing from the Norwegian Institute of Technology in Trondheim, Norway in 1995, and his PhD from Ecole Nationale Supérieure des Télécommunications in Paris, France in 1999. After a short but extremely valuable industry career near Seattle, Washington, where he designed, implemented, and evaluated colour imaging system solutions for multifunction peripherals and other imaging devices and systems, he joined Gjøvik University College (GUC, http://www.hig.no) in Gjøvik, Norway, in 2001.
He is currently Professor of Colour Imaging at GUC’s Faculty of Computer Science and Media Technology, and member of the Norwegian Colour and Visual Computing Laboratory (http://www.colourlab.no), where he teaches, supervises MSc and PhD students, and researches in the field of colour imaging. His current research interests include multispectral colour imaging, print and image quality, colorimetric device characterisation, colour management, and cultural heritage imaging, and he has co-authored more than 150 publications within the field.
His professional memberships include IS&T (Society for Imaging Science and Technology), SPIE (the international society for optics and photonics), and ISCC (The Inter-Society Color Council). He is GUC’s representative in iarigai (The International Association of Research Organizations for the Information, Media and Graphic Arts industries), and the Norwegian delegate to Division 8 of the CIE (International Commission on Illumination).
He is currently project co-ordinator for an EU project (Marie Curie ITN CP7.0, http://www.cp70.org), project leader for a large research project funded by the Research Council of Norway (HyPerCept), GUC’s representative in the management committee of the Erasmus Mundus Master Course CIMET (Colour in Informatics and Media Technology, http://www.master-erasmusmundus-color.eu/), and Norway’s Management Committee member in the COST Action COSCH (Colour and Space in Cultural Heritage, http://cosch.info). He is co-founder and chair of Forum Farge, Norway’s new interdisciplinary colour association.
Location: Euler Bleu
July 22, 2013 at 14h30
Invited speaker : Prof. Zoltan Kato,
Szeged University, Hungary
Title: A unifying framework for correspondence-less shape alignment and its medical applications
Abstract: We consider the estimation of diffeomorphic transformations aligning a known shape and its distorted observation. The classical way to solve this registration problem is to find correspondences between the shapes and then compute the transformation parameters from these landmarks. Here we propose a novel framework where the exact transformation is obtained as the solution of a polynomial system of equations. The method has been applied to 2D and 3D medical image registration, industrial inspection, planar homography estimation, etc., and its robustness has also been demonstrated. The advantage of the proposed solution is that it is fast, easy to implement, has linear time complexity, works without established correspondences and provides an exact solution regardless of the magnitude of the transformation. Bio: Zoltan Kato received the BS and MS degrees in computer science from the Jozsef Attila University, Szeged, Hungary in 1988 and 1990, respectively, and the PhD degree from the University of Nice, France in 1994, conducting his research at INRIA Sophia Antipolis. Since then, he has been a visiting research associate at the Computer Science Department of the Hong Kong University of Science & Technology; an ERCIM postdoc fellow at CWI, Amsterdam; and a visiting fellow at the School of Computing, National University of Singapore. In 2002, he joined the Institute of Informatics, University of Szeged, Hungary, where he heads the Department of Image Processing and Computer Graphics. His research interests include image segmentation, registration, shape matching, statistical image models, Markov random fields, color, texture, motion, shape modeling, and variational and level set methods. He has served on several program committees of major conferences (e.g. Area Chair for ICIP 2008 and 2009) and has been an Associate Editor for IEEE Transactions on Image Processing.
He is the President of the Hungarian Association for Image Processing and Pattern Recognition (KEPAF) and a Senior Member of IEEE.
Location: Euler Bleu
June 11, 2013
Invited speaker : Prof. Grégoire Mercier,
Institut Mines-Telecom / Telecom Bretagne, France
Title: Remote Sensing image processing with missing data Abstract: In this talk, the processing of a time series affected by missing data is investigated.
On the one hand, we consider a large amount of data (basically a set of registered optical images) with missing samples (pixels affected by the presence of clouds or shadows during acquisition). A processing scheme based on the Kohonen map is presented to recover the missing data. The self-organizing properties of the Kohonen map allow the processing of data with missing values. Usually, missing values have to be flagged to perform appropriate processing. Here we present the use of sparse distances, which have the capability to deal with outliers in the time series.
On the other hand, we consider a pair of co-registered images where the first is used as a priori knowledge and the latter as an image containing missing data to be recovered. Here we exploit the capability of compressive sensing to reconstruct an entire signal from a very limited number of samples. In this case, the first image is used to learn the structural dependencies in the compressive sensing formulation. When applied to the image affected by missing values, the structural dependencies extracted from the first image are applied to the (valid) samples of the latter, performing the reconstruction of the missing areas.
Illustrations will be given with MODIS image time series and QuickBird high-resolution images.
Bio: Grégoire Mercier was born in France in 1971. He received the Engineer Degree from the Institut National des Télécommunications, Evry, France in 1993, his Ph.D. degree from the University of Rennes I, Rennes, France in 1999 and his Habilitation à Diriger des Recherches from the University of Rennes I in 2007. Since 1999, he has been with the Ecole Nationale Supérieure des Télécommunications de Bretagne, where he was an Associate Professor in the Image and Information Processing department (ITI); he has been a full Professor since early 2010. His research interests are in remote sensing image compression and segmentation, especially for hyperspectral and Synthetic Aperture Radar data. Currently, his research is dedicated to change detection and combating pollution. He was a visiting researcher at DIBE (University of Genoa, Italy) from March to May 2006, where he developed change detection techniques for heterogeneous data. He was also a visiting researcher at CNES (France) from April to June 2007 to take part in the Orfeo Toolbox development. Since 2012, he has been an external collaborator of the AYIN research group at INRIA. Prof. Grégoire Mercier was President of the French Chapter of the IEEE Geoscience and Remote Sensing Society from 2010 to 2013. He is an Associate Editor for the IEEE Geoscience and Remote Sensing Letters. Location: To be announced
May 27, 2013 at 14h30
Invited speaker : Prof. Qiyin Fang,
Department of Engineering Physics & School of Biomedical Engineering, McMaster University, Canada
Title: Hyperspectral and fluorescence lifetime imaging technology development and their applications in biomedicine Abstract: One of the primary driving forces behind today’s healthcare is technological advances in diagnostic imaging, minimally-invasive tools, and drug discovery. Hyperspectral imaging (HSI) and fluorescence lifetime imaging (FLIM) are two areas where advanced optical imaging technology can provide both morphological and functional information, which is critically required in clinical decision making and can reduce drug discovery costs. Our recent work has focused on two main barriers to translating HSI and FLIM technology into bedside instruments: 1) the real-time, high-throughput data acquisition required to be compatible with clinical procedures; and 2) the miniaturization of key components. We have introduced a number of novel techniques to multiplex the data acquisition process for spectrally, temporally and spatially resolved optical signals, and demonstrated their feasibility in minimally invasive medical diagnosis and high-content cancer drug screening. Bio: Qiyin Fang is currently an Associate Professor at McMaster University and holds the Canada Research Chair in Biophotonics.
Prior to joining McMaster, Dr. Fang was with the Minimally Invasive Surgical Technology Institute of Cedars-Sinai Medical Center in Los Angeles. Dr. Fang obtained his BSc (Physics) from Nankai University, his MSc (applied physics) and PhD (Biomedical Physics) from East Carolina University. His current research interests include optical spectroscopy and image guided minimally invasive diagnostic and therapeutic devices, miniaturized MOEMS sensors and imaging systems, and advanced optical microscopy and their emerging applications.
Dr. Fang is a senior member and visiting lecturer of SPIE.
April 8, 2013
Invited speaker : Prof. David Windridge,
CVSSP, University of Surrey, UK
Title: A Neutral Point Method for Kernel-Based Combination of Disjoint Training Data in Multi-Modal Pattern Recognition Abstract: In multimodal information fusion domains, such as remote sensing, it is not uncommon to encounter objects with one or more missing modalities for which combination cannot be performed. This is particularly problematic for kernel-based fusion, where objects themselves define the embedding space, making conventional methods for dealing with missing modality information (such as mean-substitution) inapplicable. However, by interpreting the aggregate of disjoint training sets as a complete data set with missing inter-modality kernel measurements to be filled in by appropriately chosen substitutes, a novel kernel-based technique, the neutral-point method, is derived. Missing modalities are thus substituted in a manner that is unbiased with regard to the overall classification. Critically, unlike conventional missing-data substitution methods, explicit calculation of neutral points may be omitted by virtue of their implicit incorporation within the SVM training framework. Experiments based on the publicly available Biosecure DS2 multimodal data set show that the SVM-NPS approach achieves very good generalization performance, since the method is, in structural terms, a kernel-based analog of the well-known sum rule combination scheme, exhibiting similar error-cancelling behaviour. Bio: David Windridge (B.Sc. (Hons), M.Sc., Ph.D.) is a Senior Research Fellow at the CVSSP, University of Surrey, UK with research interests in Multiple Classifier Systems, Kernel Methods and Cognitive Systems (and a former interest in Observational Cosmology). He has authored and played a leading role on a range of machine learning projects (including, most recently, EPSRC ACASVA and EU FP7 DIPLECS), as well as a number of industrial and academic collaborations, including EU INTAS PRINCESS.
His early career commenced in industrial electro-acoustic research, which was followed by a research-based M.Sc in Radio-Astronomy, leading to a Ph.D in Statistical Cosmology (Univ. of Bristol, UK). This led to his current position at the CVSSP (Univ. of Surrey, UK), following which his remit expanded to include the areas of pattern-recognition and cognitive systems. He has authored more than 60 peer-reviewed publications. Location: Coriolis
March 11, 2013 at 14h30
Invited speaker : Nazre Batool,
University of Maryland, USA
Title: Analysis of wrinkles in Aging Human Faces for Detection and Biometric Applications Abstract: Analysis and modeling of aging human faces have been done extensively in the past several years resulting in numerous applications. Most of this research work is based on learning techniques focused on the appearance of faces at different ages incorporating facial features (e.g. shapes and patch based textures). However, we do not find much work done on the analysis of facial wrinkles in general or of those specific to a person. In this talk, I will present my recent work on analysis and modeling of facial wrinkles specifically for different applications. Facial wrinkles are challenging low-level image features to analyze. A skin patch looks very different when viewed or illuminated from different angles due to the physical properties of skin. This makes subtle skin features like facial wrinkles difficult to be detected in images acquired in uncontrolled imaging settings. In my work the image properties of wrinkles (i.e. intensity gradients and geometric properties), are investigated and used for detection of wrinkles and their use as a soft biometrics.
First, I will present results on the detection/localization of wrinkles in images using a Marked Point Process. Wrinkles are modeled as sequences of line segments in a Bayesian framework, where a prior probability model encodes the likely geometric properties of wrinkles and a data likelihood term is based on image intensity gradients. Wrinkles are localized by sampling the posterior probability using a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. Then I will present results of our recent investigation of user-drawn and automatically detected wrinkles, assessing the discriminative power of wrinkle patterns as a soft biometric for recognizing individuals. A set of facial wrinkles from an image is treated as a curve pattern and used for subject recognition. Given the wrinkle patterns from a query and a gallery image, several distance measures based on the Hausdorff distance and curve-to-curve correspondences are proposed to quantify the similarity between them.
The results of experiments on images of variable resolution acquired in uncontrolled settings are quite promising: it is possible not only to detect wrinkles automatically from images, but also to use them as a soft biometric to recognize people.
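The Hausdorff distance used above to compare wrinkle curve patterns can be sketched in a few lines. This toy example (not the speaker's implementation, which combines several distance measures and curve-to-curve correspondences) compares two polylines sampled as point sets:

```python
import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance: for each point of A, the distance to its
    nearest neighbour in B; then the maximum over A."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two curve patterns given as
    (n, 2) arrays of sampled curve points."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Toy example: two wrinkle-like polylines sampled as point sets.
curve1 = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]])
curve2 = np.array([[0.0, 0.5], [1.0, 0.6], [2.0, 0.5]])
print(hausdorff(curve1, curve2))  # 0.5
```

A query pattern would then be matched to the gallery pattern minimizing this distance.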
Bio: Ms. Nazre Batool is currently a Ph.D. candidate in Electrical and Computer Engineering at the University of Maryland, College Park, USA. She is conducting her doctoral research on the modeling and analysis of facial skin wrinkles at the Center for Automation Research (CfAR) under the supervision of Prof. Rama Chellappa. She is a recipient of a prestigious five-year Fulbright Ph.D. award from Pakistan. She received her B.Sc. degree in Electrical Engineering from the University of Engineering and Technology, Lahore, Pakistan in 2004 and her M.Sc. degree in Electrical and Electronics Engineering from Universiti Teknologi PETRONAS (UTP), Tronoh, Perak, Malaysia in 2008. During her Master's research she played a major part in the collaboration between the dermatology department of the General Hospital, Kuala Lumpur and the Intelligent Imaging & Telemedicine Laboratory, UTP, Malaysia. Her Master's thesis was on the analysis and modeling of 3D skin surface textures of healthy skin and skin lesions using the Markov-Gibbs random field modeling approach. Her main research interests are computer vision, statistical modeling techniques and digital image processing. Location: Lagrange Gris
February 18, 2013
Invited speaker : Prof. Daniel Racoceanu,
CNRS and University Pierre et Marie Curie, National University of Singapore, Image & Pervasive Access Lab – IPAL, UMI CNRS, Singapore
Title: Symbolic approaches using prior knowledge and prior shapes for high-content microscopic image exploration. Roadmap and perspectives inspired by a long-running French-Singaporean collaboration Abstract: The presentation will be structured in three parts (about 35′ + QA):
Short presentation of the IPAL international CNRS joint lab (Image & Pervasive Access Lab – UMI CNRS 2955)
URL : http://ipal.i2r.a-star.edu.sg/
Main topic (20′) :
Cognitive virtual microscopy for breast cancer grading: whole slide image exploration in histopathology using a symbolic cognitive vision approach
Framework: ANR TecSan MICO project – THALES, AGFA Healthcare, TRIBVN, Pitié-Salpêtrière, Paris
URL : http://ipal.cnrs.fr/project/mico
Quick overview (10′):
Shape and texture prior for 2D/3D analysis/synthesis of neural stem cells for fate prediction, 2D differentiation and 3D reconstruction
Framework: A*STAR JCO project (Joint Council Office, Agency for Science, Technology and Research), Singapore – Intelligent vision for neural stem cell tracking and differentiation
URL : http://ipal.cnrs.fr/project/ivs4nsc
Bio: Professor at the University Pierre and Marie Curie (UPMC) – Sorbonne Universities, Paris and Senior Research Fellow at the French National Center for Scientific Research (CNRS), Daniel Racoceanu is the Director (France) of the International Joint Research Unit IPAL (Image & Pervasive Access Lab) – UMI CNRS, created in Singapore between the CNRS, the National University of Singapore, the Institute for Infocomm Research of the Agency for Science, Technology and Research (I2R/A*STAR), the UPMC, the University Joseph Fourier and the Institut Mines-Telecom. He received his Ph.D. (1997) and Habilitation (2006) from the University of Besançon, France, and was Project Manager at General Electric Energy Products – Europe before joining the University of Besançon as an Associate Professor in 1999. PI of the MICO (Cognitive Microscopy) ANR TecSan project involving AGFA Healthcare, THALES, LIP6/UPMC and the SME TRIBVN, Daniel is involved in the FlexMIm FUI project (Fonds Unique Interministériel of the French Ministry of Industry, 2013-2016) and in the JCO/A*STAR (A*STAR Joint Council Office, Singapore) projects entitled "An Intelligent Vision System for Quantitative Microscopy in Neural Stem Cells Progenitor Growth and Differentiation" (2009-2013) and "A suite of integrated microscopy systems for imaging anatomies of complex 3D cell culture systems" (2013-2016), in collaboration with I2R, SERC/A*STAR (Science and Engineering Research Council, A*STAR), the Bioinformatics Institute (BII) and the Institute of Medical Biology (IMB), BMRC/A*STAR (BioMedical Research Council, A*STAR). Adjunct Professor at the National University of Singapore since 2009, his fields of interest include cognitive diagnosis/prognosis assistance using semantic medical image analysis and content-based medical image retrieval. Location: Coriolis
January 14, 2013
Invited speaker : Prof. Cédric Richard,
Université de Nice Sophia-Antipolis, France
Title: Nonlinear unmixing of hyperspectral data Abstract: Spectral unmixing is an important issue in analyzing remotely sensed hyperspectral data. Although the linear mixture model has obvious practical advantages, there are many situations in which it may not be appropriate and could be advantageously replaced by a nonlinear one. In this presentation, we formulate a new kernel-based paradigm that relies on the assumption that the mixing mechanism can be described by a linear mixture of endmember spectra, with additive nonlinear fluctuations defined in a reproducing kernel Hilbert space. This family of models has a clear physical interpretation and makes it possible to take complex interactions between endmembers into account. We also investigate how to incorporate spatial correlation into the nonlinear abundance estimation process. A nonlinear unmixing algorithm operating in reproducing kernel Hilbert spaces, coupled with an L1-type spatial regularization, is derived. Experimental results, with both synthetic and real hyperspectral images, illustrate the effectiveness of the proposed scheme. Bio: Cédric Richard was born January 24, 1970 in Sarrebourg, France. He received the Dipl.-Ing. and M.S. degrees in 1994 and the Ph.D. degree in 1998 from the University of Technology of Compiègne, France, all in electrical and computer engineering. From 1999 to 2003, he was an Associate Professor at the University of Technology of Troyes (UTT), France. From 2003 to 2009, he was a Professor in the Institut Charles Delaunay (ICD, CNRS FRE 2848), LM2S Group, at UTT, where he also headed the LM2S Group. Since September 2009, he has been a Professor in the Lagrange Laboratory (University of Nice Sophia-Antipolis, UMR CNRS 7293, Observatoire de la Côte d'Azur). In winter 2009 and in the autumns of 2010 and 2011, he was a Visiting Researcher with the Department of Electrical Engineering, Federal University of Santa Catarina (UFSC), Florianopolis, Brazil, collaborating with Prof. Jose-Carlos M. Bermudez.
Cédric Richard is a junior member of the Institut Universitaire de France since October 2010. His current research interests include statistical signal processing and machine learning.
Cédric Richard is the author of over 150 papers. He was the General Chair of the XXIst francophone conference GRETSI on Signal and Image Processing, held in Troyes, France, in 2007, and of the IEEE Statistical Signal Processing Workshop (IEEE SSP'11), held in Nice, France, in 2011. Since 2005, he has been a member of the GRETSI association board and of the EURASIP society, and a Senior Member of the IEEE. From 2006 to 2010, he served as an Associate Editor of the IEEE Transactions on Signal Processing. Since 2009, he has served as an Associate Editor of Signal Processing (Elsevier). He is a EURASIP local liaison officer and a member of the Signal Processing Theory and Methods (SPTM) Technical Committee of the IEEE Signal Processing Society. In 2013, he was also elected a member of the Machine Learning for Signal Processing (MLSP) Technical Committee of the IEEE Signal Processing Society.
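For context on the abstract above, the linear mixture model that the talk's kernel-based paradigm generalizes can be sketched as follows. This is only the linear baseline with a crude projection onto the abundance simplex, not the RKHS method with nonlinear fluctuations described in the talk:

```python
import numpy as np

# Linear mixing model: r = E a + n, with abundances a >= 0 and sum(a) = 1.
rng = np.random.default_rng(0)
L, R = 50, 3                        # number of bands, number of endmembers
E = rng.random((L, R))              # endmember spectra (one per column)
a_true = np.array([0.6, 0.3, 0.1])  # ground-truth abundances
r = E @ a_true + 0.001 * rng.standard_normal(L)  # observed mixed pixel

# Unconstrained least-squares estimate, then a crude simplex projection.
a_hat, *_ = np.linalg.lstsq(E, r, rcond=None)
a_hat = np.clip(a_hat, 0.0, None)
a_hat /= a_hat.sum()
print(np.round(a_hat, 2))
```

The kernel approach of the talk would add to `E @ a` a nonlinear term living in a reproducing kernel Hilbert space, estimated jointly with the abundances.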
December 10, 2012
Invited speaker : Prof. Carlos López-Martínez,
Universitat Politecnica de Catalunya Barcelonatech, Spain
Title: The Role of Polarimetry in SAR Remote Sensing Abstract: Polarimetric Synthetic Aperture Radar (PolSAR) is recognised today as a powerful technique for the observation of the Earth's surface and the extraction of quantitative geophysical and biophysical information. This importance is supported by current spaceborne missions such as ALOS (L-band), Radarsat-2 (C-band), ENVISAT (C-band) and TerraSAR-X (X-band), by future missions such as ALOS-2 (L-band), RCM (C-band) and Sentinel-1 (C-band), and even by novel mission concepts such as Biomass (P-band) and TanDEM-L (L-band).
Polarimetry makes it possible to sense different properties of the terrain, especially those related to the structural characteristics of the observed targets and those depending on water content. Additionally, polarimetric diversity is appealing in itself, since the complete polarimetric scattering properties of a target can be obtained from physical measurements at only two orthogonal polarization states. This property, known as polarization synthesis, offers the possibility of exploring the complete polarization space with a given objective in mind, for instance the optimization of a polarimetric observable or parameter. Despite all these advantages, polarimetric SAR data are also affected by speckle noise, which must be removed before the physical information within the data can be accessed.
The objective of this talk is to present the current state of SAR polarimetry and to show how this source of information can be employed to estimate useful information. To this end, polarimetric SAR data are normally processed with signal and image processing techniques, while taking into account the limits imposed by the electromagnetic scattering process. This signal-processing view of SAR polarimetry will therefore also be covered in the presentation.
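The polarization synthesis property mentioned in the abstract can be illustrated with the standard Sinclair-matrix formulation: the co-polarized received voltage on polarization p when transmitting on p is p^T S p. A minimal sketch, with an ideal trihedral (odd-bounce) scatterer as the illustrative target:

```python
import numpy as np

def pol_vector(psi, chi):
    """Unit Jones vector for orientation angle psi and ellipticity angle chi."""
    return np.array([np.cos(psi) * np.cos(chi) - 1j * np.sin(psi) * np.sin(chi),
                     np.sin(psi) * np.cos(chi) + 1j * np.cos(psi) * np.sin(chi)])

def copolar_power(S, psi, chi):
    """Received power when transmitting and receiving on the same polarization:
    P = |p^T S p|^2 (polarization synthesis from the 2x2 Sinclair matrix S)."""
    p = pol_vector(psi, chi)
    return np.abs(p @ S @ p) ** 2

# Sinclair matrix of an ideal trihedral (odd-bounce) scatterer: identity.
S = np.eye(2)
print(copolar_power(S, 0.0, 0.0))        # linear horizontal: 1.0
print(copolar_power(S, 0.0, np.pi / 4))  # circular: ~0 (handedness flips on odd bounce)
```

Sweeping psi and chi over their ranges yields the familiar co-polarized polarization signature of the target.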
Bio: Carlos López-Martínez received the M.Sc. degree in electrical engineering and the Ph.D. degree from the Universitat Politècnica de Catalunya, Barcelona, Spain, in 1999 and 2003, respectively.
From October 2000 to March 2002, he was with the Frequency and Radar Systems Department, HR, German Aerospace Center, DLR, Oberpfaffenhofen, Germany. From June 2003 to December 2005, he was with the Image and Remote Sensing Group – SAPHIR Team, at the Institute of Electronics and Telecommunications of Rennes (I.E.T.R. – CNRS UMR 6164), Rennes, France. In January 2006, he joined the Universitat Politècnica de Catalunya, Barcelona, Spain, as a Ramón y Cajal researcher, where he is currently an associate professor in the area of remote sensing and microwave technology. His research interests include SAR and multidimensional SAR, radar polarimetry, physical parameter inversion, digital signal processing, estimation theory and harmonic analysis.
He is associate editor of IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing and he served as guest editor of the EURASIP Journal on Advances in Signal Processing. He has organized different invited sessions in international conferences on radar and SAR polarimetry. He has presented advanced courses and seminars on radar polarimetry to a wide range of organizations and events. Dr. López-Martínez received the Student Prize Paper Award at the EUSAR 2002 Conference, Cologne, Germany.
Location: To be announced
November 26, 2012
Invited speaker : Prof. Jesus Angulo,
CMM-Centre de Morphologie Mathématique, Départ. Mathématiques et Systèmes, MINES ParisTech, France
Title: Mathematical morphology for hyperspectral images Abstract: Mathematical morphology is a non-linear methodology for image processing based on a pair of adjoint and dual operators, dilation and erosion, used to compute sup/inf-convolutions in local neighborhoods. The extension of morphological operators to multivariate images, and in particular to hyperspectral images, requires the introduction of appropriate vector ordering strategies. In this talk, we illustrate how multivariate statistics and machine learning techniques can be exploited to design vector ordering and to include results of morphological operators in the pipeline of hyperspectral image analysis.
Firstly, we focus on the notion of supervised vector ordering, which is based on a supervised learning formulation. A training set for the background and another for the foreground are needed, as well as a supervised method to construct the ordering mapping. Two particular learning techniques are considered: 1) kriging-based vector ordering and 2) support vector machine-based vector ordering. The use of supervised ordering in morphological template matching, which corresponds to the extension of the hit-or-miss operator to multivariate images, will also be briefly illustrated. Secondly, we consider an unsupervised vector ordering based on a statistical depth function computed by random projections. We begin by exploring the properties an image must satisfy to ensure that the ordering and the associated morphological operators can be interpreted in the same way as in the grey-scale case. This leads us to the notion of background/foreground decomposition. Additionally, invariance properties are analyzed and theoretical convergence is shown.
The work presented in this talk is part of the Ph.D. thesis of Santiago Velasco-Forero, defended in 2012.
S. Velasco-Forero and J. Angulo. Supervised ordering in Rn: Application to morphological processing of hyperspectral images. IEEE Transactions on Image Processing, Vol. 20, No. 11, 3301-3308, 2011.
S. Velasco-Forero and J. Angulo. Random projection depth for multivariate mathematical morphology. IEEE Journal of Selected Topics in Signal Processing, Vol. 6, Issue 7, 753-763, 2012.
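As a minimal, hypothetical illustration of supervised vector ordering: the papers above use kriging- and SVM-based orderings, whereas the nearest-mean rule below is merely a stand-in. A multivariate dilation then picks, in each window, the pixel ranking highest under the learned scalar ordering h:

```python
import numpy as np

def dilate(img, h, radius=1):
    """Dilation of a multivariate image: in each (2*radius+1)^2 window, output
    the pixel vector maximizing the supervised ordering h (the sup w.r.t. h)."""
    H, W, _ = img.shape
    out = img.copy()
    rank = h(img.reshape(-1, img.shape[2])).reshape(H, W)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            win = rank[i0:i1, j0:j1]
            k = np.unravel_index(win.argmax(), win.shape)
            out[i, j] = img[i0 + k[0], j0 + k[1]]
    return out

# Supervised ordering from training means: pixels closer to the "foreground"
# mean than to the "background" mean rank higher (hypothetical nearest-mean rule).
bg, fg = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
h = lambda X: np.linalg.norm(X - bg, axis=1) - np.linalg.norm(X - fg, axis=1)

img = np.tile(bg, (5, 5, 1))
img[2, 2] = fg                       # one red (foreground) pixel on blue
grown = dilate(img, h)
print(int((h(grown.reshape(-1, 3)) > 0).sum()))  # red region grows to 9 pixels
```

The corresponding erosion would take the window argmin instead, giving the adjoint pair the abstract builds on.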
Bio: Jesús Angulo was born in Cuenca, Spain, in 1975. He received a degree in Telecommunications Engineering from Polytechnical University of Valencia, Spain, in 1999, with a Master Thesis on Image and Video Processing. He obtained his PhD in Mathematical Morphology and Image Processing, from the Ecole des Mines de Paris (France), in 2003, under the guidance of Prof. Jean Serra. He got his Habilitation degree (French HDR) from the Université Paris-Est in 2012. He is currently senior researcher (Maître de Recherche) in the Center of Mathematical Morphology (Department of Mathematics and Systems) at MINES ParisTech. His research interests are in the areas of multivariate image processing (colour, hyper/multi-spectral, temporal series, polarimetric, tensor imaging) and mathematical morphology (filtering, segmentation, shape and texture analysis, stochastic and geometry approaches, PDE approaches), and their application to the development of theoretically-sound and high-performance algorithms and software in the fields of Biomedicine/Biotechnology, Remote Sensing and Industrial Vision. Location: To be announced
October 22, 2012 at 2:30pm
Invited speaker : Prof. Johan Debayle,
École Nationale Supérieure des Mines, Saint-Étienne, France
Title: Adaptive image processing and analysis Abstract: General Adaptive Neighborhood Image Processing (GANIP) is a mathematical framework for the adaptive processing and analysis of gray-tone images. An intensity image is represented by a set of local neighborhoods defined for each pixel of the image under study. These so-called General Adaptive Neighborhoods (GANs) are simultaneously adaptive to the spatial structures, the analyzing scales and the physical settings of the image to be addressed and/or the human visual system. The GANs are then used as adaptive operational windows for local image transformations (morphological filters, rank/order filters, …) and for local image analysis (local descriptors, …). Successful applications have been reported in several image processing areas, e.g., image enhancement, filtering, restoration, focus measurement, edge detection and segmentation. Bio: Johan Debayle received his Ph.D. degree in the field of image processing and analysis in 2005. In early 2006, he joined the French National Institute for Research in Computer Science and Control (INRIA) as a postdoctoral fellow in the field of biomedical image analysis. Since 2007, he has been an assistant professor in the Mathematical Imaging and Pattern Analysis Group within the CIS Center and the LPMG Laboratory, UMR CNRS 5148, at the École Nationale Supérieure des Mines de Saint-Étienne, France. In 2012, he received the French Habilitation degree (Habilitation à Diriger des Recherches) in the field of mathematical imaging from the University Jean Monnet of Saint-Étienne, France. His research interests include adaptive image processing and morphological analysis.
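A minimal sketch of the GAN idea, assuming the simplest tolerance-based homogeneity criterion (the actual GANIP framework is considerably more general, with criteria adapted to scale and physical settings): each pixel's neighborhood is the connected region of similar intensities around it, and a filter is applied over that region instead of a fixed window.

```python
import numpy as np
from collections import deque

def gan(img, seed, m):
    """General Adaptive Neighborhood of a pixel (tolerance criterion): the
    connected set of pixels whose intensity stays within m of the seed's."""
    H, W = img.shape
    ref = img[seed]
    seen = {seed}
    q = deque([seed])
    while q:                              # breadth-first region growing
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (i + di, j + dj)
            if 0 <= n[0] < H and 0 <= n[1] < W and n not in seen \
               and abs(img[n] - ref) <= m:
                seen.add(n)
                q.append(n)
    return seen

def adaptive_mean(img, m):
    """Mean filter computed over each pixel's GAN instead of a fixed window."""
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            nb = gan(img, (i, j), m)
            out[i, j] = np.mean([img[p] for p in nb])
    return out

img = np.zeros((4, 4))
img[:, 2:] = 100.0                        # a sharp vertical edge
sm = adaptive_mean(img, m=10)
print(sm[0, 0], sm[0, 3])                 # edge preserved: 0.0 100.0
```

Because neighborhoods never cross the intensity discontinuity, the smoothing preserves the edge that a fixed-window mean would blur.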
Location: Lagrange Gris
September 14, 2012 at 1:30pm
Invited speaker : Prof. Qiyin Fang,
Canada Research Chair in Biophotonics, McMaster University
Title: Integration of spectroscopy and imaging: new tools for the old light Abstract: Over the past two decades, optical imaging has emerged as an important tool in biomedicine, finding applications from the visualization of cellular structures and intracellular processes to minimally invasive diagnosis. In addition to advances in detectors and imaging optics, a number of advanced imaging techniques have been developed to measure spectral and temporal information at each pixel: essentially performing time-lapse spectroscopic measurement for imaging. These techniques, e.g. hyperspectral imaging (HSI) and fluorescence lifetime imaging (FLIM), provide not only additional sources of contrast but also information on the microenvironment of the targeted biological structures, both of which are critical for clinical decision making. Although HSI is a relatively mature technique and commercial FLIM systems are now available, a number of obstacles still keep both from clinical application. The two main barriers are: 1) the miniaturization required to make them compatible with existing clinical modalities, such as endoscopes; and 2) real-time data analysis and decision-making algorithms. Various new developments in spectral and temporal-domain imaging techniques will be introduced, and their potential applications in minimally invasive medical diagnosis and high-content cancer drug screening will be discussed. Results presented will include intraoperative detection of brain tumor margins and the development of encapsulated multimodality imaging devices. Finally, novel micro- and nano-optical systems for in-vitro cancer drug screening will be briefly mentioned. Bio: Qiyin Fang is currently an Associate Professor at McMaster University and holds the Canada Research Chair in Biophotonics.
Prior to joining McMaster, Dr. Fang was with the Minimally Invasive Surgical Technology Institute of Cedars-Sinai Medical Center in Los Angeles. Dr. Fang obtained his BSc (Physics) from Nankai University, his MSc (applied physics) and PhD (Biomedical Physics) from East Carolina University. His current research interests include optical spectroscopy and image guided minimally invasive diagnostic and therapeutic devices, miniaturized MOEMS sensors and imaging systems, and advanced optical microscopy and their emerging applications.
Dr. Fang is a senior member and visiting lecturer of SPIE.
July 30, 2012
Invited speaker : Prof. Krishna Mohan Buddhiraju,
Centre of Studies in Resources Engineering, Indian Institute of Technology, Bombay, India
Title: Feature Extraction and Classification of High Resolution Satellite Images Abstract: In view of the increasing resolution of sensors on spaceborne platforms, it is now possible to utilize this imagery for a variety of civilian and defence applications. To extract useful information from high spatial resolution images, a generic segmentation framework is employed, with modules ranging from pre-processing to classification. The classification step involves kernel classification using recently proposed kernels from the literature, and a comparison is made with standard kernels such as the radial basis function. Results are demonstrated using WorldView and QuickBird images of urban areas. Bio: Prof. Krishna Mohan Buddhiraju ("Krishna") received his PhD in Electrical Engineering from the Indian Institute of Technology Bombay in 1991. His areas of interest include image analysis, geographic information systems, and multimedia educational content development for satellite image processing and analysis. He is a member of the IEEE and a life member of the Indian Society of Remote Sensing and the Indian Society of Geomatics. He has supervised over 30 Masters students and seven doctoral students. He is the recipient of the 2003 M.N. Saha Memorial Award for the Best Application Oriented Paper published in the Institution of Electronics and Telecommunication Engineers Journal of Research. He is a regular reviewer for several international journals and an examiner of PhD theses for many universities in India. Location: Euler Bleu
July 23, 2012
Invited speaker : Edouard Barthelet,
Telecom Bretagne, Brest, France
Title: ML Model Inversion for 3D Building Extraction in HR Optical and SAR Imagery Abstract: The availability of very high resolution remote sensing images over the last decade has opened a wide range of possibilities in urban area analysis. However, the interpretation of these images remains a challenging problem because of their high geometric and radiometric complexity, arising from the parallax, shadow and layover effects that affect above-ground objects. These effects, which become critical in dense urban environments, are all the more difficult to deal with in that they depend on the acquisition geometry and, for some of them, on the acquisition time.
While specific data sets, such as stereoscopic, radargrammetric or interferometric ones, make it possible to partly overcome these effects by retrieving scene elevation information, they are not always available, so one often has to settle for single remote sensing images.
In this context, an approach has been designed to extract 3D buildings from building-specific primitive binary images derived from single high resolution remote sensing images. The proposed approach, which addresses building detection as well as building parameter estimation, relies on building projection algorithms for both optical and SAR (Synthetic Aperture Radar) images. Based on these projection algorithms, building primitives can be simulated, so that probability density functions of primitive pixel locations in the primitive binary images can be computed. Building parameters are then estimated through a maximum likelihood model inversion method involving the previously computed probability density functions.
The efficiency of the proposed method, which achieves sub-pixel estimation accuracy, will finally be demonstrated on simulated and real images of semi-urban areas acquired by both optical (QuickBird) and SAR (TerraSAR-X) high resolution spaceborne sensors.
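The maximum-likelihood inversion idea can be conveyed with a hypothetical 1-D toy problem (the actual method works on full 2-D optical and SAR projection geometries): a building of height h projects a bright primitive of length h into a binary profile, a few pixels are flipped by noise, and h is recovered by maximizing a Bernoulli log-likelihood over candidate heights.

```python
import numpy as np

n, eps, h_true = 40, 0.1, 20          # profile length, flip probability, true height

def simulate(h):
    """Projection model (unit tan(elevation)): a primitive of length h pixels."""
    prof = np.zeros(n, dtype=int)
    prof[:h] = 1
    return prof

obs = simulate(h_true)
obs[[3, 25]] = 1 - obs[[3, 25]]       # two noise flips in the observed primitive

def loglik(h):
    """Bernoulli log-likelihood: each pixel agrees with the model w.p. 1 - eps."""
    agree = (obs == simulate(h)).sum()
    return agree * np.log(1 - eps) + (n - agree) * np.log(eps)

h_hat = max(range(1, 31), key=loglik)  # exhaustive ML search over heights
print(h_hat)  # 20
```

In the paper's setting the "simulate" step is the optical/SAR building projection algorithm and the likelihood is built from the computed pixel-location densities; the inversion principle is the same.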
Bio: Edouard Barthelet received the Engineer’s degree from Ecole Centrale Marseille, Marseille, France and the Research Master’s degree in signal and image processing from University Paul Cézanne, Aix-Marseille III, Marseille, France in 2009. He is currently pursuing his Ph.D. degree at Telecom Bretagne, Brest, France, in collaboration with Thales Communications & Security, Paris, France.
His research activities mainly focus on statistical image analysis for urban remote sensing applications, such as 3D-reconstruction, object-based registration and change detection from high resolution optical and SAR images.
Location: Euler Bleu
July 9, 2012
Invited speaker : Prof. Nataliya Zagorodna,
Department of Computer Science, Ternopil Ivan Pul’uj Technical University, Ternopil, Ukraine
Title: Mathematical Modeling and Statistical Analysis of Cyclic Signals Abstract: The creation or improvement of a great variety of modern automated systems, such as control systems, signal processing systems, diagnostic systems and decision-making systems, requires state-of-the-art methods of signal analysis. The initial stage in the investigation of any signal is the creation of an adequate mathematical model for its description; appropriate methods of statistical analysis can then be built in accordance with that model. Most real signals can be successfully represented by nonstationary random processes. However, the statistical analysis of such processes requires an ensemble of realizations, an approach that entails significant expense in data acquisition. There exists a type of signal, called a cyclic signal, that is random in nature yet characterized by the repetition of certain features at certain time periods. Such signals arise in many areas, including the power industry (water, electricity and gas consumption), medicine (cardiosignals), computer networks (traffic signals) and many others.
In this talk we will present some mathematical models of cyclic signals and discuss the particularities of their statistical analysis. In addition, we will present methods that can be successfully applied to the statistical analysis of real signals using a stochastic approach that takes their cyclic nature into account.
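As a minimal illustration (not the talk's specific models): a cyclically stationary signal with known period T has phase-dependent statistics, which can be estimated by folding a single long realization cycle by cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
T, cycles = 24, 200                      # e.g. hourly data with a daily cycle
t = np.arange(T * cycles)
# Synthetic cyclic signal: periodic mean plus random fluctuations.
signal = 10 + 5 * np.sin(2 * np.pi * t / T) + rng.standard_normal(t.size)

folded = signal.reshape(cycles, T)       # one row per cycle
phase_mean = folded.mean(axis=0)         # phase-dependent mean estimate
phase_std = folded.std(axis=0)           # phase-dependent spread estimate

print(np.round(phase_mean[6], 1))        # near the crest of the cycle: ~15
```

Averaging across cycles plays the role of the ensemble average that a general nonstationary process would require, which is exactly what makes cyclic signals tractable from a single realization.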
Bio: Dr. Nataliya Zagorodna received her B.S. degree in Computer Science in 2003, and her Ph.D. degree in Mathematical Modeling and Computing Methods (Engineering) in 2007 from Ternopil Ivan Pul’uj State Technical University, Ukraine.
From September 2006 to September 2007, she was an Assistant Professor at Ternopil Ivan Pul'uj State Technical University, Ukraine, and since September 2007 she has held the position of Associate Professor at the same university. Dr. Zagorodna is also a participant in a program funded by the Ukrainian Ministry of Education and is currently (January 2012-May 2012) a research fellow at Northern Illinois University, IL, USA. Her research interests are in the areas of mathematical modeling, statistical analysis, simulation and prediction of signals of a cyclic nature.
Location: Euler Bleu
June 18, 2012
Invited speaker : Prof. Konrad Schindler,
Photogrammetry and Remote Sensing Lab, Institute of Geodesy and Photogrammetry, ETH Zurich
Title: Monocular 3D Estimation With Deformable Object Models Abstract: Monocular 3D reconstruction is a geometrically ill-defined problem. Still, it can be accomplished when prior knowledge about the observed objects is available. We revisit ideas from the early days of computer vision, namely 3D geometric representations of semantically defined object categories. These representations can recover detailed geometric object hypotheses, including the relative 3D positions of object parts. In combination with recent robust techniques for local shape description and inference, such representations can be applied to real-world images. We analyze this approach in detail and demonstrate novel applications enabled by the geometric object class representation, such as fine-grained categorization of cars according to their 3D geometry, and ultra-wide baseline matching. Bio: Konrad Schindler received a Diplomingenieur (M.tech) degree in photogrammetry from Vienna University of Technology, Austria in 1999, and a PhD in computer science from Graz University of Technology, Austria, in 2003. He has worked as a photogrammetric engineer in private industry, and has held researcher positions in the Computer Graphics and Vision Department of Graz University of Technology, the Digital Perception Lab of Monash University, and the Computer Vision Lab of ETH Zurich. He became assistant professor of Image Understanding at TU Darmstadt in 2009, and since 2010 has been a tenured professor of Photogrammetry and Remote Sensing at ETH Zurich. His research interests lie in the fields of computer vision, photogrammetry, and remote sensing. He currently serves as head of the Institute of Geodesy and Photogrammetry, and as associate editor for the ISPRS Journal of Photogrammetry and Remote Sensing and for the Image and Vision Computing Journal. Location: Coriolis, Galois Blg
May 10, 2012 at 2:30pm
Invited speaker : Prof. Ronan Fablet,
Associate Professor, Telecom Bretagne, Brest, France
Title: Images as random collections of local signatures and objects: application to texture recognition and ocean sensing data Abstract: The extraction and characterization of local image signatures (e.g., image keypoints, image shapes, objects, …) have emerged as powerful tools for providing invariant descriptions of image content for recognition and classification. In this talk, we investigate the extent to which such collections of local signatures can be regarded as realizations of (random) point processes. Based on point process statistics and models, we derive image descriptors that embed both visual and spatial information to address recognition and classification problems. We report applications to texture and scene classification as well as to ocean sensing data (especially acoustic seabed sensing and fisheries acoustics). Refs.:
H.-G. Nguyen, R. Fablet, J.-M. Boucher. Visual textures as realizations of multivariate log-Gaussian Cox processes. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, June 2011.
H.-G. Nguyen, R. Fablet, J.M. Boucher. Keypoint-based analysis of sonar images: application to seabed recognition. IEEE Trans. on Geoscience and Remote Sensing, to appear.
R. Lefort, R. Fablet, L. Berger, J.-M. Boucher. Spatial statistics of objects in 3-D sonar images: application to fisheries acoustics. IEEE Geoscience and Remote Sensing Letters, 9(1):56-59, 2012.
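A minimal sketch of the underlying idea, assuming keypoint locations form a realization of a spatial point process: their second-order structure can be summarized with an empirical Ripley K function (edge effects ignored here; the cited papers use richer log-Gaussian Cox process models as descriptors).

```python
import numpy as np

def ripley_k(points, r, area):
    """Empirical Ripley K at radius r for an (n, 2) array of point locations
    in a window of the given area (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    return area * (d < r).sum() / (n * (n - 1))

rng = np.random.default_rng(0)
pts = rng.random((500, 2))               # "keypoints" scattered in a unit square
# Under complete spatial randomness, K(r) is approximately pi * r^2;
# clustering pushes K above that curve, regularity pushes it below.
for r in (0.05, 0.1):
    print(round(ripley_k(pts, r, area=1.0), 4), round(np.pi * r**2, 4))
```

Evaluating K (or related point process statistics) at several radii yields a feature vector that captures the spatial layout of keypoints, complementing their visual descriptors.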
Location: Lagrange Gris
April 19, 2012 at 2:30pm
Invited speaker : Dr. Yuliya Tarabalka,
INRIA-SAM AYIN team and French Space Agency
Title: Hierarchical models and algorithms for classification of very high resolution remote sensing images Abstract: The very high spatial, spectral and temporal resolution of the latest generation of imaging sensors provides rich information for every pixel of a scene at different moments in time, increasing the ability to distinguish physical structures in the scene. However, the large number of spectral channels, as well as frequent time series, present challenges for image classification. While pixelwise classification techniques process each pixel independently, without considering information about spatial structures, further improvements can be achieved by incorporating spatial information into a classifier, especially in areas where structural information is important for distinguishing between classes. In this talk, we will present novel strategies for spectral-spatial classification of remote sensing images. We will focus on new hierarchical graph-based models for the classification of hyperspectral data. In particular, we will discuss new dissimilarity measures between image regions and new convergence criteria. We will show that the new techniques improve classification accuracies and provide classification maps with more homogeneous regions compared to previously proposed methods. Finally, we will show that the designed model can be successfully adapted and applied to multitemporal image classification. Bio: Dr. Yuliya Tarabalka received the B.S. degree in computer science from Ternopil Ivan Pul'uj State Technical University, Ukraine, in 2005, the M.Sc. degree in signal and image processing from the Grenoble Institute of Technology (INPG), France, in 2007, the Ph.D. degree with European label in signal and image processing from INPG, and the Ph.D. degree in electrical engineering from the University of Iceland, in 2010.
From July 2007 to January 2008, she was a Researcher with the Norwegian Defence Research Establishment, Norway. From September 2010 to December 2011, she was a Postdoctoral research fellow with CISTO, NASA Goddard Space Flight Center, Greenbelt, MD, USA. She is currently a Postdoctoral research fellow with INRIA Sophia Antipolis – Méditerranée, team AYIN, France (financed by the French Space Agency CNES). Her research interests are in the areas of image analysis and signal processing, pattern recognition, remote sensing, imaging spectroscopy, and high-performance computing.
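One simple way to inject spatial structure into a pixelwise classifier, in the spirit of the spectral-spatial strategies the abstract alludes to (though far simpler than the talk's hierarchical graph-based models), is majority voting within segmentation regions: each pixel adopts the most frequent pixelwise label of its region. A minimal sketch with synthetic toy labels; the function name and data are illustrative, not from the talk.

```python
import numpy as np

def majority_vote(pixel_labels, segments):
    """Replace each pixel's label by the majority pixelwise label
    of the segmentation region it belongs to."""
    refined = pixel_labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = np.bincount(pixel_labels[mask])  # label histogram in region
        refined[mask] = np.argmax(votes)         # majority label wins
    return refined

# Toy 3x3 example: a pixelwise map with two "noisy" pixels,
# and a segmentation map with three regions.
pixel_labels = np.array([[0, 0, 1],
                         [0, 1, 1],
                         [2, 1, 1]])
segments = np.array([[0, 0, 1],
                     [0, 0, 1],
                     [2, 1, 1]])
refined = majority_vote(pixel_labels, segments)
```

The quality of the result hinges entirely on the segmentation: regions that straddle a class boundary propagate the wrong majority label, which is one motivation for the more careful region models discussed in the talk.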
Location: Lagrange Gris
March 5, 2012 at 2:30 PM
Invited speaker : Prof. J. Francos, Department of Electrical and Computer Engineering, Ben-Gurion University, Beer-Sheva 84105, Israel
Title: Manifold Embedding for Geometric Deformations Estimation Abstract: We introduce a method for estimating the geometric deformation of a known object, where the deformation belongs to a known family of deformations. Assume we have a set of observations (for example, images) of different objects, each undergoing a different geometric deformation, yet all the deformations belong to the same family of deformations, Q. As a result of the action of Q, the set of different realizations of each object is generally a manifold in the space of observations, and the manifolds of the different objects are strongly related. We show that in cases where the set of deformations, Q, admits a finite-dimensional representation, there is a non-linear mapping from the space of observations to a low-dimensional linear space. The manifold corresponding to each object is mapped to a linear subspace with the same dimension as that of the manifold. This mapping, which we call the universal manifold embedding, enables the estimation of geometric deformations using classical linear theory. The embedding of the space of observations depends on the deformation model and is independent of the specific observed object; hence it is universal. By analyzing the stochastic properties of the proposed non-linear mapping in the presence of various error sources, optimal estimators of the deformation models are derived. We provide examples of this embedding for the case of elastic deformations of one-dimensional and two-dimensional signals.
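The linearization at the heart of this idea can be illustrated on a toy 1-D affine case. For g(x) = f(ax + b), a change of variables gives ∫ g dx = M0/a and ∫ x·g dx = (M1 − b·M0)/a², where M0, M1 are the zeroth and first moments of f, so the deformation parameters follow from linear relations between moments of the template and of the observation. The sketch below (synthetic Gaussian template; function name illustrative) omits the nonlinear functions composed with the observation that the full method uses to obtain enough independent equations, since the identity suffices for this two-parameter toy.

```python
import numpy as np

def estimate_affine_1d(x, f_vals, g_vals):
    """Recover (a, b) in g(x) = f(a*x + b) from low-order moments,
    which are linear in functions of the deformation parameters."""
    dx = x[1] - x[0]
    M0 = np.sum(f_vals) * dx        # integral of f(y) dy
    M1 = np.sum(x * f_vals) * dx    # integral of y*f(y) dy
    m0 = np.sum(g_vals) * dx        # integral of g(x) dx   = M0 / a
    m1 = np.sum(x * g_vals) * dx    # integral of x*g(x) dx = (M1 - b*M0) / a**2
    a = M0 / m0
    b = (M1 - a**2 * m1) / M0
    return a, b

# Synthetic example: Gaussian template deformed by a = 1.3, b = 0.4.
x = np.linspace(-10.0, 10.0, 200001)
f_vals = np.exp(-x**2)                        # template f
a_true, b_true = 1.3, 0.4
g_vals = np.exp(-(a_true * x + b_true)**2)    # observation g(x) = f(a*x + b)
a_est, b_est = estimate_affine_1d(x, f_vals, g_vals)
```

No iterative registration is needed: the moments map the observation into a space where the unknown parameters enter linearly, which is the appeal of the embedding viewpoint.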
Bio: Joseph M. Francos received the B.Sc. degree in computer engineering in 1982 and the D.Sc. degree in electrical engineering in 1991, both from the Technion-Israel Institute of Technology, Haifa. In 1993 he joined the Department of Electrical and Computer Engineering, Ben-Gurion University, Beer-Sheva, Israel, where he is now a Professor. He heads the Mathematical Imaging Group and the Signal Processing track. He has held visiting positions at Rensselaer Polytechnic Institute, Troy; the Massachusetts Institute of Technology; the University of California, Davis; the University of Illinois, Chicago; and the University of California, Santa Cruz. His current research interests are in image registration, estimation of object deformations from images, parametric modeling and estimation of 2-D random fields, random field theory, and texture analysis and synthesis. Location: Coriolis, Galois Bldg