Software

  • OBL



    • OBL is a library for visual localization in the monocular setting that exploits correspondences between 2D objects detected in the image and 3D objects in the scene. The 2D objects are modelled as ellipses and the 3D objects as ellipsoids.


    • The EllCV library, written in C, performs pose computation and reconstruction using ellipse (2D) – ellipsoid (3D) correspondences.

      A client-server-service model has also been set up to enable remote access to these applications: services are launched by the server as dedicated Docker containers running on a host machine.
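
      The core geometric relation behind ellipse–ellipsoid correspondences is that an ellipsoid, represented by its dual quadric Q*, projects to a dual conic C* = P Q* P^T under a camera matrix P. Below is a minimal NumPy sketch of this relation (an illustration only, not EllCV's C API; the function and variable names are ours):

        import numpy as np

        def ellipsoid_dual_quadric(center, axes, R):
            """Dual quadric Q* of an ellipsoid with given centre, semi-axes and rotation R."""
            Z = np.eye(4)
            Z[:3, :3], Z[:3, 3] = R, center
            return Z @ np.diag([axes[0]**2, axes[1]**2, axes[2]**2, -1.0]) @ Z.T

        def project_ellipsoid(Q_dual, P):
            """Project a dual quadric with a 3x4 camera matrix P (C* = P Q* P^T)
            and recover the centre, semi-axes and orientation of the image ellipse."""
            C = P @ Q_dual @ P.T
            C = C / C[2, 2]                                # normalised dual conic
            center = C[:2, 2]
            M = np.outer(center, center) - C[:2, :2]       # = R2 diag(a^2, b^2) R2^T
            evals, evecs = np.linalg.eigh(M)               # ascending eigenvalues
            axes = np.sqrt(np.maximum(evals[::-1], 0.0))   # [major, minor] semi-axes
            angle = np.arctan2(evecs[1, 1], evecs[0, 1])   # direction of the major axis
            return center, axes, angle

      Pose computation then amounts to finding the camera P whose projected ellipses best align with the detected ones.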




  • DeepEllPose



    • DeepEllPose leverages 3D-aware ellipse prediction to improve the accuracy of camera pose estimation based on ellipses and ellipsoids.


    • DeepEllPose contains the training and inference code of the object detection and ellipse prediction networks (PyTorch). This code is an extended implementation of the article 3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation \cite{zins:hal-03602394}. The 7-Scenes dataset is used as an example, but the provided tools make it easily applicable to other datasets. The library also contains code for camera pose estimation from three ellipse-ellipsoid pairs, as well as tools to easily manipulate and visualize such objects.


    • https://gitlab.inria.fr/tangram/3d-aware-ellipses-for-visual-localization
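
      As an illustration of what an ellipse prediction network can look like, here is a hypothetical PyTorch regression head that maps per-detection features to the five parameters of an ellipse expressed relative to the detection's bounding box. It sketches the general idea only; the names, dimensions and parameterization are assumptions and do not reflect the released architecture.

        import math
        import torch
        import torch.nn as nn

        class EllipseHead(nn.Module):
            """Hypothetical head: per-detection feature vector -> ellipse parameters
            (centre cx, cy, semi-axes a, b, orientation theta), relative to the box."""
            def __init__(self, in_dim=256):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(in_dim, 128), nn.ReLU(inplace=True),
                    nn.Linear(128, 5),
                )

            def forward(self, feats, boxes):
                # feats: N x in_dim pooled features, boxes: N x 4 as (x1, y1, x2, y2)
                p = self.mlp(feats)
                w, h = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]
                cx = boxes[:, 0] + torch.sigmoid(p[:, 0]) * w
                cy = boxes[:, 1] + torch.sigmoid(p[:, 1]) * h
                a = torch.sigmoid(p[:, 2]) * w                 # coarse bound on the semi-axes
                b = torch.sigmoid(p[:, 3]) * h
                theta = torch.tanh(p[:, 4]) * math.pi / 2      # orientation in (-pi/2, pi/2)
                return torch.stack([cx, cy, a, b, theta], dim=1)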
  • >V<



    • A fast and accurate method for vanishing point detection in uncalibrated images of man-made environments.


    • >V< is a Matlab implementation of the a-contrario method published at ECCV 2018 \cite{m:simon:hal-01865251}. It detects the zenith (vertical vanishing point) and all horizontal vanishing points in uncalibrated images of man-made environments (urban, indoor, industrial, …). In addition, >V< can automatically associate a Manhattan frame to the scene, that is, three particular vanishing points whose directions are pairwise orthogonal and aligned with some box structures of the scene, e.g. the buildings. This makes it possible to recover, for instance, the camera focal length and/or the orientation of the camera with respect to these structures, and can also help reconstruct the scene from image analysis. Finally, we added some code to warp (rectify) an image so that all the vertical planes present in this image appear as in a frontal view.


    • https://members.loria.fr/GSimon/software/v/
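
      As a small illustration of the calibration use case mentioned above, the sketch below (Python/NumPy, not the Matlab code of >V<) recovers the focal length and camera orientation from a Manhattan frame of three finite, pairwise-orthogonal vanishing points, assuming square pixels and a known principal point:

        import numpy as np

        def calibrate_from_manhattan_frame(vps, principal_point):
            """Focal length and camera orientation from three pairwise-orthogonal,
            finite vanishing points given in pixel coordinates."""
            (u1, v1), (u2, v2) = vps[0], vps[1]
            u0, v0 = principal_point
            # Orthogonality of the first two directions gives the focal length:
            # (u1-u0)(u2-u0) + (v1-v0)(v2-v0) + f^2 = 0
            # (requires the two vanishing points to be in general position so that f^2 > 0).
            f = np.sqrt(-((u1 - u0) * (u2 - u0) + (v1 - v0) * (v2 - v0)))
            K = np.array([[f, 0, u0], [0, f, v0], [0, 0, 1.0]])
            # Each vanishing point back-projects to the scene direction K^{-1} v; the three
            # normalised directions form the columns of the camera-to-scene rotation
            # (up to per-axis sign choices, ignored in this sketch).
            R = np.zeros((3, 3))
            for i, (u, v) in enumerate(vps):
                d = np.linalg.solve(K, np.array([u, v, 1.0]))
                R[:, i] = d / np.linalg.norm(d)
            return f, R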
  • PoLAR



    • Portable Library for Augmented Reality


    • PoLAR (Portable Library for Augmented Reality) is a framework that helps create graphical applications for augmented reality, image visualization and medical imaging. PoLAR was designed to offer powerful visualization functionalities without requiring its users to be specialists in Computer Graphics. The framework provides an API on top of state-of-the-art libraries, Qt for building GUIs and OpenSceneGraph for high-end visualization, so that researchers and engineers with a background in Computer Vision can create beautiful AR applications with little programming effort.
      The framework is written in C++ and published under the GNU GPL license.


    • http://polar.inria.fr
  • BSpeckleRender




    • This library implements a new method for synthesizing speckle images deformed by an arbitrary deformation field set by the user. Such images are very useful for assessing the various digital image correlation (DIC) methods used to estimate displacement fields in experimental mechanics. Since the deformations are very small, it is necessary to ensure that no additional bias is introduced by the image synthesis algorithm. The proposed method is based on the Monte Carlo evaluation of images generated by a Boolean model.


    • https://members.loria.fr/FSur/software/BSpeckleRender/
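
      The sketch below illustrates the general principle in Python/NumPy (a toy stand-in, not the library itself): a speckle pattern is defined as a continuous Boolean model (here, a union of random disks), each pixel value is estimated by Monte Carlo sampling of the area covered by the pattern, and the deformed image is rendered by evaluating the same continuous pattern at displaced sample positions, so that no interpolation of a discrete image is involved. The disk model, parameter values and the convention used to apply the displacement field are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        H = W = 64
        centres = rng.uniform(0, [W, H], size=(400, 2))    # Boolean model: random disks
        radii = rng.uniform(1.0, 2.5, size=400)

        def pattern(points):
            """Indicator of the continuous speckle pattern at arbitrary (x, y) points."""
            d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
            return (d2 <= radii[None, :] ** 2).any(axis=1).astype(float)

        def render(displacement=None, n_samples=32):
            """Monte Carlo rendering: each pixel value is the fraction of random
            subpixel samples covered by the (possibly deformed) continuous pattern."""
            xs, ys = np.meshgrid(np.arange(W), np.arange(H))
            pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
            acc = np.zeros(len(pix))
            for _ in range(n_samples):
                samples = pix + rng.uniform(0, 1, size=pix.shape)    # jitter inside each pixel
                if displacement is not None:
                    samples = samples + displacement(samples)        # user-set field u(x, y)
                acc += pattern(samples)
            return (acc / n_samples).reshape(H, W)

        reference = render()
        deformed = render(displacement=lambda p: np.stack(
            [0.02 * p[:, 1], np.zeros(len(p))], axis=1))             # e.g. a small shear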
  • NoLoDuDoCT




    • This is an algorithm that decomposes images into cartoon and texture components. The spectrum components due to texture are detected on the basis of a statistical hypothesis test, where the null hypothesis models a purely cartoon patch. The statistics are estimated in a non-local way.


    • https://members.loria.fr/FSur/software/NoLoDuDoCT/
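
      To illustrate the general idea only (the actual null model, test statistic and non-local estimation of the published method differ), here is a toy Python sketch in which each patch's DCT coefficients are compared to a crude null level estimated across all patches, and the coefficients flagged as significant are attributed to the texture component:

        import numpy as np
        from scipy.fft import dctn, idctn

        def toy_cartoon_texture(img, patch=16, alpha=3.0):
            """Toy patchwise split: DCT coefficients well above a null level
            (median magnitude at the same frequency over all patches) go to the
            texture component; the remaining coefficients form the cartoon."""
            H, W = img.shape
            cartoon, texture = np.zeros_like(img, float), np.zeros_like(img, float)
            blocks = [(i, j) for i in range(0, H - patch + 1, patch)
                             for j in range(0, W - patch + 1, patch)]
            coeffs = [dctn(img[i:i+patch, j:j+patch], norm='ortho') for i, j in blocks]
            null_level = np.median(np.abs(np.array(coeffs)), axis=0)
            for (i, j), c in zip(blocks, coeffs):
                mask = np.abs(c) > alpha * null_level   # "significant" coefficients
                mask[:2, :2] = False                    # keep the lowest frequencies in the cartoon
                texture[i:i+patch, j:j+patch] = idctn(np.where(mask, c, 0), norm='ortho')
                cartoon[i:i+patch, j:j+patch] = idctn(np.where(mask, 0, c), norm='ortho')
            return cartoon, texture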
  • OA-SLAM



    • OA-SLAM is an object-aided system for Simultaneous Localization and Mapping in unseen environments, built on ORB-SLAM. Objects are detected in 2D images and automatically reconstructed as ellipsoids. The use of objects dramatically improves the relocalization capabilities of the system.


    • OA-SLAM uses objects as landmarks to improve the relocalization capabilities of SLAM systems. It builds on the point-based ORB-SLAM2 and allows online reconstruction of 3D objects, modeled as ellipsoids, from their detections in 2D images.


    • https://gitlab.inria.fr/tangram/oa-slam
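
      One standard linear formulation for reconstructing an ellipsoid from multi-view ellipse detections (in the spirit of dual-quadric SLAM methods; not necessarily the exact estimator used in OA-SLAM) is sketched below in Python/NumPy: each view contributes the constraint P_i Q* P_i^T = s_i C*_i, with an unknown scale per view, and the symmetric dual quadric Q* is recovered as the null vector of the stacked linear system.

        import numpy as np

        IDX4 = [(i, j) for i in range(4) for j in range(i, 4)]   # 10 parameters of symmetric Q*

        def quadric_row(P, a, b):
            """Coefficients of (P Q* P^T)[a, b] as a linear form in the 10 parameters of Q*."""
            row = np.zeros(10)
            for k, (i, j) in enumerate(IDX4):
                row[k] = P[a, i] * P[b, i] if i == j else P[a, i] * P[b, j] + P[a, j] * P[b, i]
            return row

        def triangulate_dual_quadric(projections, dual_conics):
            """Linear estimate of a dual ellipsoid Q* from per-view dual conics C*_i,
            using P_i Q* P_i^T = s_i C*_i with one unknown scale s_i per view."""
            n = len(projections)
            rows = []
            for v, (P, C) in enumerate(zip(projections, dual_conics)):
                for a in range(3):
                    for b in range(a, 3):             # 6 entries of the symmetric C*
                        r = np.zeros(10 + n)
                        r[:10] = quadric_row(P, a, b)
                        r[10 + v] = -C[a, b]          # enforces (P Q* P^T)[a, b] = s_v C*[a, b]
                        rows.append(r)
            _, _, Vt = np.linalg.svd(np.array(rows))
            q = Vt[-1, :10]                           # null vector of the stacked system
            Q = np.zeros((4, 4))
            for k, (i, j) in enumerate(IDX4):
                Q[i, j] = Q[j, i] = q[k]
            Q /= Q[3, 3]                              # normalise; the ellipsoid centre is then:
            return Q, Q[:3, 3]

      The dual conics C*_i can be built from the detected ellipse parameters in the same way as the dual quadric is built from the ellipsoid parameters in the sketch given under OBL above.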
  • StrainNet



    • StrainNet estimates subpixel displacement and strain fields from pairs of reference and deformed images of a flat speckled surface, as Digital Image Correlation (DIC) does. See papers [1] and [2] for details.

      [1] S. Boukhtache, K. Abdelouahab, F. Berry, B. Blaysat, M. Grédiac and F. Sur. "When Deep Learning Meets Digital Image Correlation", Optics and Lasers in Engineering, Volume 136, 2021. Available at: https://hal.archives-ouvertes.fr/hal-02933431

      [2] S. Boukhtache, K. Abdelouahab, A. Bahou, F. Berry, B. Blaysat, M. Grédiac and F. Sur. "A lightweight convolutional neural network as an alternative to DIC to measure in-plane displacement fields", Optics and Lasers in Engineering, 2022.



    • PyTorch implementation


    • https://github.com/DreamIP/StrainNet
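
      For illustration, a minimal PyTorch sketch of the overall setup is given below: a fully-convolutional network takes the reference and deformed speckle frames as a two-channel input and regresses a dense two-channel displacement field, trained with an endpoint-error loss on synthetic pairs. The network shown is a toy stand-in; the actual StrainNet architectures and training procedure are described in [1] and [2].

        import torch
        import torch.nn as nn

        class TinyDisplacementNet(nn.Module):
            """Toy stand-in: reference + deformed frames in (2 channels),
            dense displacement field (u, v) out (2 channels, in pixels)."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 2, 3, padding=1),
                )

            def forward(self, reference, deformed):
                return self.net(torch.cat([reference, deformed], dim=1))

        # One training step on a placeholder batch; real training uses rendered
        # speckle pairs with known ground-truth displacement fields.
        model = TinyDisplacementNet()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        reference = torch.rand(4, 1, 64, 64)
        deformed = torch.rand(4, 1, 64, 64)
        gt_field = torch.zeros(4, 2, 64, 64)
        prediction = model(reference, deformed)
        loss = torch.norm(prediction - gt_field, dim=1).mean()   # mean endpoint error
        loss.backward()
        optimizer.step()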
