The system for identifying objects from images includes a module for aggregating and pooling images, the module receiving as input first, second and third pluralities of images labeled respectively by an expert in the field, through machine learning and through deep machine learning, and delivering as output a plurality of pooled labeled adjustment images having the best accuracy; and a module for aggregating and pooling invariants receiving as input the first, second and third pluralities of invariants labeled respectively by the expert in the field, through machine learning and through deep machine learning, and delivering as output a plurality of pooled labeled adjustment invariants having the best accuracy. In response to a new plurality of images of the object to be identified, the first, second and third identification modules are designed to use as input, separately and sequentially for the respective identification thereof, the plurality of pooled labeled adjustment images and/or the plurality of pooled labeled adjustment invariants originating from the aggregation and pooling modules.
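The pooling logic described above can be sketched as a small vote-and-fallback routine. This is a minimal illustration, not the patent's implementation: the three label sources (expert, machine learning, deep learning) and the per-source accuracies are assumed inputs, and the "best accuracy" selection is reduced here to majority voting with a fallback to the most reliable source.

```python
# Hypothetical sketch of the aggregation-and-pooling module: three label
# sources (expert, machine learning, deep learning) label the same images;
# the pool keeps, per image, the majority label when two or more sources
# agree, else the label of the most accurate source. Accuracies are
# illustrative assumptions.
from collections import Counter

def pool_labels(expert, ml, dl, accuracies=(0.99, 0.90, 0.95)):
    """Merge three parallel label lists into one pooled list."""
    sources = (expert, ml, dl)
    best_source = max(range(3), key=lambda i: accuracies[i])
    pooled = []
    for labels in zip(*sources):
        label, votes = Counter(labels).most_common(1)[0]
        pooled.append(label if votes >= 2 else labels[best_source])
    return pooled

expert = ["cat", "dog", "cat", "bird"]
ml     = ["cat", "dog", "dog", "fish"]
dl     = ["cat", "cat", "dog", "bird"]
print(pool_labels(expert, ml, dl))  # ['cat', 'dog', 'dog', 'bird']
```

The same routine would apply unchanged to the invariants pool, since it only operates on parallel label sequences.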
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
A method for characterizing cutaneous pigmentary disorders in an individual which includes: for each pigmentary disorder: on a date tkm acquiring 2D images of a pigmentary disorder from a plurality of angles and reconstructing at least one 3D image; storing the images in a first folder; on the basis of the images, calculating parameters of the pigmentary disorder, and storing in a second folder; evaluating the parameters and storing in a third folder; iterating at least one of the four preceding steps on multiple dates, and for each iteration: comparing the data for at least one period and identifying the changes; storing in a fourth folder per period; for each fourth folder, grouping the folders together in a fifth folder defining a snapshot of the pigmentary disorder; aggregating the fifth folders in a sixth folder defining a dynamic profile of the pigmentary disorder; iterating the preceding steps to obtain a sixth folder for each additional pigmentary disorder; and generating, for the individual, a knowledge base of their pigmentary disorders aggregating the sixth folders.
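The six-folder hierarchy above can be modelled as nested records. The sketch below is purely illustrative; the field names (images, parameters, evaluation, changes) are assumptions standing in for the patent's folder contents, not its actual schema.

```python
# Toy data model for the folder hierarchy: folders 1-4 per acquisition
# date, grouped into a folder-5 snapshot; folder 6 aggregates snapshots
# into a dynamic profile; the knowledge base aggregates folder-6 profiles.
def make_snapshot(date, images, parameters, evaluation, changes):
    """Folders 1-4 for one date, grouped into a folder-5 snapshot."""
    return {"date": date,
            "folder1_images": images,
            "folder2_parameters": parameters,
            "folder3_evaluation": evaluation,
            "folder4_changes": changes}

def make_dynamic_profile(disorder_id, snapshots):
    """Folder 6: the dynamic profile of one pigmentary disorder."""
    return {"disorder": disorder_id,
            "snapshots": sorted(snapshots, key=lambda s: s["date"])}

def make_knowledge_base(individual, profiles):
    """Per-individual knowledge base aggregating folder-6 profiles."""
    return {"individual": individual, "profiles": profiles}

snap = make_snapshot("2024-01-10", ["img_3d.vox"], {"area_mm2": 12.4},
                     {"grade": "mild"}, {})
kb = make_knowledge_base("patient-001",
                         [make_dynamic_profile("lesion-A", [snap])])
print(len(kb["profiles"]))  # 1
```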
An imaging system for a cutaneous pigment disorder which includes a device for acquiring 2D images of the pigment disorder, comprising: a light source comprising at least one emitter and receivers, a unit for processing the 2D images acquired, means for displaying the images processed. The invention further comprises: means for positioning the receivers at at least two viewing angles, which positioning means are distributed along a spherical cap, a structure for protecting the acquisition device and an operator, comprising a receiver-positioning opening, the protection structure being intended to be positioned on the skin. The processing unit comprises a 3D reconstruction module with a reflective Radon transform in order to obtain a 3D image which is reconstructed from the 2D images processed and the display means further comprise means for displaying the reconstructed 3D image.
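The principle behind reconstructing a volume from multiple 2D views can be shown on a toy 2D slice. The sketch below uses only two viewing angles and unfiltered back-projection; it is not the patent's reflective Radon transform, only an assumed minimal stand-in for the projection-and-back-projection idea.

```python
import numpy as np

# Toy Radon-style reconstruction on a 2-D slice: project along rows and
# columns (the 0-degree and 90-degree views), then back-project by
# smearing each projection across the image and averaging. A real system
# would use many angles and filtered back-projection.
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0  # a bright square standing in for a lesion

proj0 = img.sum(axis=0)   # view along columns (0 degrees)
proj90 = img.sum(axis=1)  # view along rows (90 degrees)

# Unfiltered back-projection: smear each 1-D projection back and average.
back = (np.tile(proj0, (4, 1)) + np.tile(proj90, (4, 1)).T) / 2.0
estimate = back / back.max()  # normalized; the bright region is recovered
print(int(estimate.argmax()))  # 5, i.e. pixel (1, 1) inside the square
```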
A method for recognizing an object of interest within a degraded 2D digital image of said object. The method comprises the following steps: - detecting (11), beforehand, the object of interest within a 2D digital image and assigning it a label; - reconstructing (13) a 3D volume of said object thus labelled from a plurality of available 2D digital images (12) of said object of interest; - storing, in a database, a record in relation to said object thus reconstructed in 3D form and labelled; - for each record thus stored, - generating (21) a new plurality of 2D digital images in a plurality of viewing modes from the 3D volume thus reconstructed (14) of each object; - training (23) a neural network on a learning set formed of the expanded set of 2D digital images thus generated and matching (22) the label of the object of interest to be recognized; - from a degraded 2D digital image of said object of interest to be recognized, - using (30) the neural network thus trained to deliver, as output, the label of the object and a confidence index linked to the recognition of the object of interest.
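The data-expansion step above, generating many labelled 2D views from one reconstructed 3D volume, can be sketched as follows. The rendering modes used here (per-axis maximum-intensity projections plus mirrored copies) are assumptions standing in for the patent's plurality of viewing modes.

```python
import numpy as np

# Hedged sketch of training-set expansion: from one labelled 3-D volume,
# render several 2-D views that all inherit the object's label. Here the
# "viewing modes" are max projections along each axis and their mirrors.
def expand_views(volume, label):
    views = []
    for axis in range(3):
        proj = volume.max(axis=axis)
        views.append((proj, label))
        views.append((np.fliplr(proj), label))  # mirrored variant
    return views

vol = np.zeros((3, 3, 3))
vol[1, 1, 1] = 1.0
training_set = expand_views(vol, "target")
print(len(training_set))  # 6 labelled 2-D images from one 3-D volume
```

Each (image, label) pair would then feed the neural-network training step.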
The method comprises the following steps: - from a 3D volume in voxels (11) of the complex scene, obtaining k 2D sections (15) in the 3D volume (11); - for each input 2D section (21) obtained in this way, automatically detecting, locating and identifying objects (25) of interest by means of a specialised artificial intelligence method (23) arranged to deliver, as output: - a label corresponding to each object identified in the current input section (k) (21); - a 2D bounding box (24) for each object (25) labelled in this way; - a 2D icon defined by the 2D bounding box (24) extracted in this way; - for each output 2D section (23), semantically segmenting each 2D icon defined by a 2D bounding box (24), and - concatenating the 3D results of all the output 2D sections (23) in order to generate the consolidated labels of the objects of interest (25), generate 3D bounding boxes and generate 3D icons segmented in this way.
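The concatenation step, merging per-slice 2D bounding boxes into consolidated 3D boxes, can be sketched as below. Grouping by label alone is a deliberate simplification; a real system would also check spatial overlap between consecutive slices before merging.

```python
# Illustrative consolidation: per-slice detections (k, label, x0, y0,
# x1, y1) are merged per label into one 3-D box (x0, y0, k0, x1, y1, k1)
# spanning the slices where the object was found.
def consolidate(detections):
    boxes3d = {}
    for k, label, x0, y0, x1, y1 in detections:
        if label not in boxes3d:
            boxes3d[label] = [x0, y0, k, x1, y1, k]
        else:
            b = boxes3d[label]
            b[0], b[1], b[2] = min(b[0], x0), min(b[1], y0), min(b[2], k)
            b[3], b[4], b[5] = max(b[3], x1), max(b[4], y1), max(b[5], k)
    return {label: tuple(b) for label, b in boxes3d.items()}

dets = [(2, "vehicle", 10, 10, 20, 20),
        (3, "vehicle", 11, 9, 21, 19),
        (4, "vehicle", 12, 10, 22, 20)]
print(consolidate(dets))  # {'vehicle': (10, 9, 2, 22, 20, 4)}
```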
The system for identifying objects from images comprises a module for aggregating and sharing images, the module receiving as input first, second and third pluralities of images pixelated respectively by an expert in the field, through machine learning and deep machine learning, and providing as output a plurality of shared pixelated adjustment images having maximum detail; and a module for aggregating and sharing invariants receiving as input the first, second and third pluralities of invariants pixelated respectively by an expert in the field, through machine learning and deep machine learning, and providing as output a plurality of shared pixelated adjustment invariants having maximum detail. In response to a new plurality of images of the object to be identified, the first, second and third identification modules are designed to use as input, separately and sequentially for the respective identification thereof, the plurality of shared pixelated adjustment images and/or the plurality of shared pixelated adjustment invariants originating from the aggregation and sharing modules.
The invention relates to a method for characterizing cutaneous pigmentary disorders in an individual which includes: - for each pigmentary disorder: -- on a date tkm --- acquiring 2D images of a pigmentary disorder from a plurality of angles and reconstructing at least one 3D image; --- storing said images in a first folder; --- on the basis of said images, calculating parameters of the pigmentary disorder, and storing in a second folder; --- evaluating said parameters and storing in a third folder; -- iterating at least one of the four preceding steps on multiple dates, and for each iteration: --- comparing the data for at least one period and identifying the changes; --- storing in a fourth folder per period; --- for each fourth folder, grouping the folders together in a fifth folder defining a snapshot of the pigmentary disorder; --- aggregating the fifth folders in a sixth folder defining a dynamic profile of the pigmentary disorder; - iterating the preceding steps to obtain a sixth folder for each additional pigmentary disorder; and - generating, for the individual, a knowledge base of their pigmentary disorders aggregating the sixth folders.
Imaging system for a cutaneous pigment disorder which comprises: - a device (A) for acquiring 2D images of the pigment disorder, comprising: a light source comprising at least one emitter and receivers, - a unit (B) for processing the 2D images acquired, - means (D1) for displaying the images processed. The invention further comprises: - means for positioning the receivers at at least two viewing angles, which positioning means are distributed along a spherical cap, - a structure for protecting the acquisition device (A) and an operator, comprising a receiver-positioning opening, the protection structure being intended to be positioned on the skin. The processing unit (8) comprises a 3D reconstruction module with a reflective Radon transform in order to obtain a 3D image which is reconstructed from the 2D images processed and the display means further comprise means (D2) for displaying the reconstructed 3D image.
A method for discriminating and identifying, by 3D imaging, an object in a complex scene comprises the following steps: generating a sequence of 2D MIP images of the object, from a 3D voxel volume of the complex scene, this volume visualized by an operator by using an iterative process of MIP type from a projection plane and an intensity threshold determined by the operator on each iteration, automatically extracting, from the sequence of images, coordinates of a reduced volume corresponding to the sequence of images, choosing one of the intensity thresholds used during the iterations, automatically extracting, from the 3D volume of the complex scene, from the coordinates and chosen intensity threshold, a reduced 3D volume containing the object, automatically generating, from the reduced volume, by intensity threshold optimization, an optimized intensity threshold and an optimized voxel volume, a color being associated with each intensity, identifying the object by visualization.
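The thresholding and reduced-volume extraction at the heart of the MIP iterations can be sketched as below. The axis choice and intensity threshold are assumed stand-ins for the operator's interactive input at each iteration.

```python
import numpy as np

# Minimal sketch of one MIP iteration: threshold the voxel volume, then
# take the maximum along the chosen projection axis. The reduced volume
# is the bounding box of the coordinates where thresholded voxels remain.
def mip(volume, threshold, axis=0):
    kept = np.where(volume >= threshold, volume, 0.0)
    return kept.max(axis=axis)

def reduced_volume_bounds(volume, threshold):
    idx = np.argwhere(volume >= threshold)
    return idx.min(axis=0), idx.max(axis=0)  # corner coordinates

vol = np.zeros((5, 5, 5))
vol[2, 2:4, 2:4] = 0.8  # the object of interest
vol[0, 0, 0] = 0.1      # background clutter below threshold
lo, hi = reduced_volume_bounds(vol, 0.5)
print(lo.tolist(), hi.tolist())  # [2, 2, 2] [2, 3, 3]
```

A subsequent optimization pass would then sweep the threshold inside these bounds to obtain the optimized voxel volume.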
The invention relates to a method for discrimination and identification of an object in a complex scene by 3-D imaging. The method comprises the following steps: - generating a sequence of 2-D MIP images of the object from a 3-D voxel volume of the complex scene, said volume being visualized by an operator using an iterative MIP type process, from a projection plane and from an intensity threshold determined by the operator at each iteration, - automatically extracting, from the sequence of images, the coordinates of a reduced volume corresponding to the sequence of images, - selecting one of the intensity thresholds utilized during the iterations, - automatically extracting, from the 3-D volume of the complex scene, using the coordinates and the selected intensity threshold, a reduced 3-D volume containing the object, - automatically generating an optimized intensity threshold and an optimized voxel volume from the reduced volume, by optimization of the intensity threshold, a colour being associated with each intensity, - identifying the object by visualization.
A method for 3D reconstruction of an object based on back-scattered and sensed signals, including: generating, from the sensed signals, 3D points to which their back-scattering intensity is respectively assigned, which form a set A of reconstructed data, starting from A, extracting a set B of data, whose points are located within a volume containing the object, as a function of volume characteristics F2, starting from B, extracting a set C of data characterizing the external surface of the object, the surface having regions with missing parts, depending on an extraction criterion, based on C, filling in the regions with missing parts by generation of a three-dimensional surface so as to obtain a set D of completed data of the object, without having to use an external database, and identifying the object based on D.
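The A to B to C extraction chain can be shown on a toy point cloud. In this sketch the "volume characteristics F2" are reduced to an axis-aligned bounding box and the surface-extraction criterion keeps only the outermost point per vertical column; both are assumptions standing in for the patent's criteria.

```python
# Sketch of the extraction chain: A is all reconstructed (x, y, z,
# intensity) points; B keeps points inside a bounding volume; C keeps
# B's highest point per (x, y) column as a crude external-surface rule.
def extract_B(A, box):
    (x0, x1), (y0, y1), (z0, z1) = box
    return [p for p in A
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1]

def extract_C(B):
    surface = {}
    for x, y, z, i in B:  # keep the highest point in each column
        if (x, y) not in surface or z > surface[(x, y)][2]:
            surface[(x, y)] = (x, y, z, i)
    return list(surface.values())

A = [(0, 0, 0, 0.2), (1, 1, 1, 0.9), (1, 1, 3, 0.8), (9, 9, 9, 0.1)]
B = extract_B(A, ((0, 2), (0, 2), (0, 4)))
C = extract_C(B)
print(len(B), len(C))  # 3 2
```

Filling the missing regions of C (set D) would then interpolate a surface over the gaps without consulting an external database.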
The invention relates to a method for 3D reconstruction of an object based on back-scattered and sensed signals, which comprises: - Step 1) generate, from the sensed signals, 3D points to which their back-scattering intensity is respectively assigned, which form a set A of reconstructed data, - Step 2) starting from A, extract a set B of data, whose points are located within a volume containing the object, as a function of volume characteristics F2, - Step 3) starting from B, extract a set C of data characterizing the external surface of the object, this surface having regions with missing parts, depending on an extraction criterion, - Step 4) based on C, fill in the regions with missing parts by generation of a three-dimensional surface so as to obtain a set D of completed data of the object, without having to use an external database, - Step 5) identify the object based on D. FIGURE 3
A method for synthetic reconstruction of objects includes: extracting criteria from a knowledge base; extracting, from sensed signals filtered by the criteria, weak signals; extracting, from the weak signals, weak signals of interest; removing noise from and amplifying the weak signals of interest and obtaining useful weak signals; identifying useful direct information, from useful weak signals filtered by the criteria and supplying optimum criteria; reconstructing, using the useful direct information, information of interest; reconstructing, using the information of interest, useful information and supplying optimum criteria; reconstructing, based on the useful information, three-dimensional information, supplying a recognition state file and supplying the optimum criteria; and updating the criteria with the optimum criteria.
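The weak-signal chain above, extract, denoise, amplify, can be sketched on a toy 1D signal. The detection floor and target amplification level are illustrative stand-ins for the knowledge-base criteria the method extracts and updates.

```python
import statistics

# Toy sketch of the weak-signal chain: filter received samples with a
# detection criterion, denoise by median filtering, then amplify toward
# a target peak level to obtain the "useful weak signals".
def extract_weak(signals, floor):
    return [s for s in signals if abs(s) >= floor]

def denoise(signals, window=3):
    out = []
    for i in range(len(signals)):
        lo, hi = max(0, i - window // 2), min(len(signals), i + window // 2 + 1)
        out.append(statistics.median(signals[lo:hi]))
    return out

def amplify(signals, target_peak=1.0):
    gain = target_peak / max(abs(s) for s in signals)
    return [s * gain for s in signals]

received = [0.001, 0.05, 0.06, 0.055, 0.002, 0.04]
useful = amplify(denoise(extract_weak(received, floor=0.01)))
print(round(max(useful), 6))  # 1.0
```

The later reconstruction stages would consume `useful` and feed optimal criteria back into the knowledge base.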
G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01B 15/04 - Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons for measuring contours or curvatures
G01B 17/06 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations for measuring contours or curvatures
G06N 5/00 - Computing arrangements using knowledge-based models
METHOD FOR THE THREE-DIMENSIONAL SYNTHETIC RECONSTRUCTION OF OBJECTS EXPOSED TO AN ELECTROMAGNETIC AND/OR ELASTIC WAVE
The invention relates to a method for the synthetic reconstruction of objects exposed to an electromagnetic and/or elastic wave that comprises identifying a useful piece of three-dimensional information from received signals, in particular weak, noisy signals. The method comprises the steps of: extracting (A11), (A12), (A2), (A31), (A32), (A4) criteria (2), (3), (4), (6), (7) and a grid (5) from the knowledge base; extracting (B1) weak signals from the received signals (8) filtered by the criteria (2); extracting (B2) weak signals of interest (10) from the weak signals (9) filtered by the criteria (3); removing the noise from and amplifying the weak signals of interest (10) and obtaining useful weak signals (11); identifying (C) a piece of direct useful information (12) from the useful weak signals (11) filtered by the criteria (4) and providing optimal criteria (2') and (3'); reconstructing (D1) a piece of information of interest (13) from the piece of direct useful information (12) filtered by the grids (5) and providing optimal grids (5'); reconstructing (D2) a piece of useful information (14) from the piece of information of interest (13) filtered by the criteria (6) and providing optimal criteria (6'); reconstructing (D3) a piece of three-dimensional information (15) of the object from the piece of useful information (14) filtered by the criteria (7), providing a recognition state file (16) and providing optimal criteria (7'); updating (E1), (E2), (E31), (E32), (E4) in the knowledge base (1) the criteria (2), (3), (6), (7) and the grid (5) using the optimal criteria (2'), (3'), (6'), (7') and the optimal grid (5') or replacing the criteria (2), (3), (6), (7) and the grid (5). The method can be used for identifying objects of interest in the management of risks and performance in the industrial, medical, security and defence domains.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01B 17/06 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations for measuring contours or curvatures
G06K 9/54 - Combinations of preprocessing functions