Sispia

France

1-15 of 15 results for Sispia
Jurisdiction
  • World: 7
  • United States: 6
  • Canada: 2
Date
  • 2022: 3
  • 2021: 2
  • Before 2021: 10
IPC Class
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons: 4
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints: 4
  • G06T 7/00 - Image analysis: 3
  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object: 2
  • G01B 17/06 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations for measuring contours or curvatures: 2
Status
  • Pending: 3
  • Registered / In Force: 12

1.

METHOD AND SYSTEM FOR IDENTIFYING OBJECTS FROM LABELED IMAGES OF SAID OBJECTS

      
Application Number 17598177
Status Pending
Filing Date 2020-03-16
First Publication Date 2022-06-09
Owner
  • SISPIA (France)
  • THALES (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Berginc, Gérard

Abstract

The system for identifying objects from images includes a module for aggregating and pooling images, the module receiving as input first, second and third pluralities of images labeled respectively by an expert in the field, through machine learning and through deep machine learning, and delivering as output a plurality of pooled labeled adjustment images having the best accuracy; and a module for aggregating and pooling invariants receiving as input the first, second and third pluralities of invariants labeled respectively by the expert in the field, through machine learning and through deep machine learning, and delivering as output a plurality of pooled labeled adjustment invariants having the best accuracy. In response to a new plurality of images of the object to be identified, the first, second and third identification modules are designed to use as input, separately and sequentially for the respective identification thereof, the plurality of pooled labeled adjustment images and/or the plurality of pooled labeled adjustment invariants originating from the aggregation and pooling modules.
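The pooling of the three label sources (expert, machine learning, deep learning) can be pictured as a simple majority vote. The sketch below is illustrative only: the abstract does not disclose the actual pooling rule, and `pool_labels`, its tie-break toward the expert label, and the example labels are all assumptions.

```python
from collections import Counter

def pool_labels(expert, ml, dl):
    """Pool three per-image label lists (expert, machine learning, deep
    learning) by majority vote; ties fall back to the expert label.
    Illustrative only -- the patent does not disclose the pooling rule."""
    pooled = []
    for e, m, d in zip(expert, ml, dl):
        label, count = Counter([e, m, d]).most_common(1)[0]
        pooled.append(label if count > 1 else e)
    return pooled

print(pool_labels(["car", "car", "tree"],
                  ["car", "bus", "bus"],
                  ["van", "bus", "tree"]))
# each image keeps the label at least two of the three sources agree on
```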

IPC Classes

  • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06N 20/20 - Ensemble learning

2.

METHOD AND SYSTEM FOR CHARACTERIZING PIGMENTARY DISORDERS IN AN INDIVIDUAL

      
Application Number 17413506
Status Pending
Filing Date 2019-12-13
First Publication Date 2022-03-10
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berechet, Ion
  • Berginc, Gérard
  • Berechet, Stefan

Abstract

A method for characterizing cutaneous pigmentary disorders in an individual which includes, for each pigmentary disorder: on a date tkm, acquiring 2D images of a pigmentary disorder from a plurality of angles and reconstructing at least one 3D image; storing the images in a first folder; calculating, on the basis of the images, parameters of the pigmentary disorder, and storing them in a second folder; evaluating the parameters and storing the results in a third folder; iterating at least one of the four preceding steps on multiple dates, and for each iteration: comparing the data for at least one period and identifying the changes; storing them in a fourth folder per period; for each fourth folder, grouping the folders together in a fifth folder defining a snapshot of the pigmentary disorder; aggregating the fifth folders in a sixth folder defining a dynamic profile of the pigmentary disorder; iterating the preceding steps to obtain a sixth folder for each additional pigmentary disorder; and generating, for the individual, a knowledge base of their pigmentary disorders aggregating the sixth folders.
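The folder hierarchy described above maps naturally onto a nested data structure. The sketch below is a hypothetical Python rendering: the function name, dictionary keys, and change-detection rule are assumptions, since the abstract specifies only the grouping of folders, not a storage format.

```python
def build_disorder_profile(observations):
    """Build the per-disorder 'folder' hierarchy sketched in the abstract.
    `observations` maps a date to a dict with 'images', 'parameters' and
    'evaluation' entries (folders one to three). Keys and the parameter
    comparison are illustrative, not the patented format."""
    profile = {"snapshots": []}               # sixth folder: dynamic profile
    dates = sorted(observations)
    for i, date in enumerate(dates):
        snapshot = dict(observations[date])   # folders 1-3 for this date
        if i > 0:                             # fourth folder: changes per period
            prev = observations[dates[i - 1]]
            snapshot["changes"] = {
                k: (prev["parameters"].get(k), v)
                for k, v in snapshot["parameters"].items()
                if prev["parameters"].get(k) != v
            }
        profile["snapshots"].append({date: snapshot})   # fifth folder
    return profile

obs = {
    "2021-01-01": {"images": ["t0.png"], "parameters": {"area": 1.0}, "evaluation": "stable"},
    "2021-06-01": {"images": ["t1.png"], "parameters": {"area": 1.4}, "evaluation": "growing"},
}
profile = build_disorder_profile(obs)
print(len(profile["snapshots"]))   # one snapshot per observation date
```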

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/103 - Measuring devices for testing the shape, pattern, size or movement of the body or parts thereof, for diagnostic purposes
  • G06T 7/00 - Image analysis
  • G06T 7/90 - Determination of colour characteristics
  • G16H 30/20 - ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS

3.

2D AND 3D IMAGING SYSTEM FOR A CUTANEOUS PIGMENT DISORDER

      
Application Number 17291598
Status Pending
Filing Date 2019-11-06
First Publication Date 2022-01-20
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berechet, Ion
  • Berginc, Gérard
  • Berechet, Stefan

Abstract

An imaging system for a cutaneous pigment disorder which includes a device for acquiring 2D images of the pigment disorder, comprising: a light source comprising at least one emitter and receivers, a unit for processing the 2D images acquired, means for displaying the images processed. The invention further comprises: means for positioning the receivers at at least two viewing angles, which positioning means are distributed along a spherical cap, a structure for protecting the acquisition device and an operator, comprising a receiver-positioning opening, the protection structure being intended to be positioned on the skin. The processing unit comprises a 3D reconstruction module with a reflective Radon transform in order to obtain a 3D image which is reconstructed from the 2D images processed and the display means further comprise means for displaying the reconstructed 3D image.
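The abstract names a reflective Radon transform for the 3D reconstruction but gives no details. As a loose illustration of the underlying idea, the sketch below does an unfiltered backprojection of two orthogonal projections of a 2D image; the function and test image are assumptions, not the patented method.

```python
import numpy as np

def backproject_two_views(proj_rows, proj_cols):
    """Unfiltered backprojection from two orthogonal projections of a 2D
    image (row sums and column sums) -- a crude stand-in for the reflective
    Radon reconstruction named in the abstract, which is not detailed there."""
    h, w = len(proj_rows), len(proj_cols)
    # Smear each projection back across the image and average the two views.
    back = (np.tile(proj_rows[:, None], (1, w)) / w
            + np.tile(proj_cols[None, :], (h, 1)) / h) / 2.0
    return back

img = np.zeros((4, 4)); img[1:3, 1:3] = 1.0        # bright square "disorder"
rec = backproject_two_views(img.sum(axis=1), img.sum(axis=0))
print(np.unravel_index(rec.argmax(), rec.shape))   # brightest pixel falls inside the square
```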

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • G06T 11/00 - 2D [Two Dimensional] image generation

4.

OBJECT RECOGNITION METHOD WITH INCREASED REPRESENTATIVENESS

      
Application Number EP2020078197
Publication Number 2021/069536
Status In Force
Filing Date 2020-10-08
Publication Date 2021-04-15
Owner
  • SISPIA (France)
  • THALES (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Berginc, Gérard

Abstract

A method for recognizing an object of interest within a degraded 2D digital image of said object. The method comprises the following steps:
  • detecting (11), beforehand, the object of interest within a 2D digital image and assigning it a label;
  • reconstructing (13) a 3D volume of said labelled object from a plurality of available 2D digital images (12) of said object of interest;
  • storing, in a database, a record for said object thus reconstructed in 3D form and labelled;
  • for each record thus stored, generating (21) a new plurality of 2D digital images in a plurality of viewing modes from the reconstructed 3D volume (14) of each object;
  • training (23) a neural network on a learning set formed of the expanded set of 2D digital images thus generated, matched (22) with the label of the object of interest to be recognized;
  • from a degraded 2D digital image of said object of interest to be recognized, using (30) the neural network thus trained to deliver, as output, the label of the object and a confidence index linked to the recognition of the object of interest.
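The view-generation step (21) can be pictured as rendering many 2D projections of the reconstructed 3D volume. A minimal sketch, assuming 90-degree rotations and a plain intensity projection as the "viewing modes" (the abstract does not say which modes are used):

```python
import numpy as np

def expand_views(volume, n_rotations=4):
    """Generate extra 2D training images from a reconstructed 3D voxel
    volume by projecting it after 90-degree rotations. A minimal stand-in
    for the multi-viewing-mode image generation step (21); the rotation
    scheme and projection are assumptions."""
    views = []
    for k in range(n_rotations):
        rotated = np.rot90(volume, k=k, axes=(0, 2))  # rotate about the y axis
        views.append(rotated.max(axis=0))             # simple intensity projection
    return views

vol = np.zeros((3, 3, 3)); vol[0, 1, 2] = 1.0         # one bright voxel
views = expand_views(vol)
print(len(views), [int(v.sum()) for v in views])      # → 4 [1, 1, 1, 1]
```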

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

5.

METHOD AND SYSTEM FOR AUTOMATICALLY DETECTING, LOCATING AND IDENTIFYING OBJECTS IN A 3D VOLUME

      
Application Number EP2020069056
Publication Number 2021/008928
Status In Force
Filing Date 2020-07-07
Publication Date 2021-01-21
Owner
  • SISPIA (France)
  • THALES (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Berginc, Gérard

Abstract

The method comprises the following steps:
  • from a 3D volume in voxels (11) of the complex scene, obtaining k 2D sections (15) in the 3D volume (11);
  • for each input 2D section (21) obtained in this way, automatically detecting, locating and identifying objects of interest (25) by means of a specialised artificial intelligence method (23) arranged to deliver, as output:
    • a label corresponding to each object identified in the current input section (k) (21);
    • a 2D bounding box (24) for each object (25) labelled in this way;
    • a 2D icon defined by the 2D bounding box (24) extracted in this way;
  • for each output 2D section (23), semantically segmenting each 2D icon defined by a 2D bounding box (24); and
  • concatenating the results of all the output 2D sections (23) in order to generate the consolidated labels of the objects of interest (25), generate 3D bounding boxes and generate 3D icons segmented in this way.
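The final concatenation step can be sketched as merging the per-section 2D boxes of each label into one 3D bounding box spanning the sections where the object appears. The tuple format and merging rule below are assumptions for illustration only.

```python
def merge_2d_boxes(section_boxes):
    """Concatenate per-section 2D detections into one 3D bounding box per
    label. `section_boxes` maps a section index k to a list of
    (label, x0, y0, x1, y1) tuples; the format is illustrative."""
    boxes3d = {}
    for k, boxes in section_boxes.items():
        for label, x0, y0, x1, y1 in boxes:
            if label not in boxes3d:
                boxes3d[label] = [x0, y0, k, x1, y1, k]
            else:
                b = boxes3d[label]
                b[0], b[1], b[2] = min(b[0], x0), min(b[1], y0), min(b[2], k)
                b[3], b[4], b[5] = max(b[3], x1), max(b[4], y1), max(b[5], k)
    return {label: tuple(b) for label, b in boxes3d.items()}

dets = {0: [("vehicle", 2, 2, 5, 5)], 1: [("vehicle", 1, 2, 6, 5)]}
print(merge_2d_boxes(dets))
# → {'vehicle': (1, 2, 0, 6, 5, 1)}  (x0, y0, k0, x1, y1, k1)
```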

IPC Classes

  • G06K 9/32 - Aligning or centering of the image pick-up or image-field
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

6.

METHOD AND SYSTEM FOR IDENTIFYING OBJECTS FROM PIXELATED IMAGES OF SAID OBJECTS

      
Application Number EP2020057107
Publication Number 2020/193253
Status In Force
Filing Date 2020-03-16
Publication Date 2020-10-01
Owner
  • SISPIA (France)
  • THALES (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Berginc, Gérard

Abstract

The system for identifying objects from images comprises a module for aggregating and sharing images, the module receiving as input first, second and third pluralities of images pixelated respectively by an expert in the field, through machine learning and deep machine learning, and providing as output a plurality of shared pixelated adjustment images having maximum detail; and a module for aggregating and sharing invariants receiving as input the first, second and third pluralities of invariants pixelated respectively by an expert in the field, through machine learning and deep machine learning, and providing as output a plurality of shared pixelated adjustment invariants having maximum detail. In response to a new plurality of images of the object to be identified, the first, second and third identification modules are designed to use as input, separately and sequentially for the respective identification thereof, the plurality of shared pixelated adjustment images and/or the plurality of shared pixelated adjustment invariants originating from the aggregation and sharing modules.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

7.

METHOD AND SYSTEM FOR CHARACTERIZING PIGMENTARY DISORDERS IN AN INDIVIDUAL

      
Application Number EP2019085162
Publication Number 2020/126932
Status In Force
Filing Date 2019-12-13
Publication Date 2020-06-25
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berechet, Ion
  • Berginc, Gérard
  • Berechet, Stefan

Abstract

The invention relates to a method for characterizing cutaneous pigmentary disorders in an individual which includes, for each pigmentary disorder:
  • on a date tkm:
    • acquiring 2D images of a pigmentary disorder from a plurality of angles and reconstructing at least one 3D image;
    • storing said images in a first folder;
    • on the basis of said images, calculating parameters of the pigmentary disorder, and storing them in a second folder;
    • evaluating said parameters and storing the results in a third folder;
  • iterating at least one of the four preceding steps on multiple dates, and for each iteration:
    • comparing the data for at least one period and identifying the changes;
    • storing them in a fourth folder per period;
    • for each fourth folder, grouping the folders together in a fifth folder defining a snapshot of the pigmentary disorder;
    • aggregating the fifth folders in a sixth folder defining a dynamic profile of the pigmentary disorder;
  • iterating the preceding steps to obtain a sixth folder for each additional pigmentary disorder; and
  • generating, for the individual, a knowledge base of their pigmentary disorders aggregating the sixth folders.

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons

8.

2D AND 3D IMAGING SYSTEM FOR A CUTANEOUS PIGMENT DISORDER

      
Application Number EP2019080330
Publication Number 2020/099198
Status In Force
Filing Date 2019-11-06
Publication Date 2020-05-22
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berechet, Ion
  • Berginc, Gérard
  • Berechet, Stefan

Abstract

Imaging system for a cutaneous pigment disorder which comprises:
  • a device (A) for acquiring 2D images of the pigment disorder, comprising a light source with at least one emitter, and receivers;
  • a unit (B) for processing the 2D images acquired;
  • means (D1) for displaying the processed images.
The invention further comprises:
  • means for positioning the receivers at at least two viewing angles, which positioning means are distributed along a spherical cap;
  • a structure for protecting the acquisition device (A) and an operator, comprising a receiver-positioning opening, the protection structure being intended to be positioned on the skin.
The processing unit comprises a 3D reconstruction module with a reflective Radon transform in order to obtain a 3D image reconstructed from the processed 2D images, and the display means further comprise means (D2) for displaying the reconstructed 3D image.

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons

9.

Method for discrimination and identification of objects of a scene by 3-D imaging

      
Application Number 15534869
Grant Number 10339698
Status In Force
Filing Date 2015-12-17
First Publication Date 2017-11-30
Grant Date 2019-07-02
Owner
  • THALES (France)
  • UNIVERSITE DE LORRAINE (France)
  • SISPIA (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Bellet, Jean-Baptiste
  • Berginc, Gérard

Abstract

A method for discriminating and identifying, by 3D imaging, an object in a complex scene comprises the following steps:
  • generating a sequence of 2D MIP images of the object from a 3D voxel volume of the complex scene, this volume being visualized by an operator using an iterative MIP-type process, from a projection plane and an intensity threshold determined by the operator on each iteration;
  • automatically extracting, from the sequence of images, the coordinates of a reduced volume corresponding to the sequence of images;
  • choosing one of the intensity thresholds used during the iterations;
  • automatically extracting, from the 3D volume of the complex scene, using the coordinates and the chosen intensity threshold, a reduced 3D volume containing the object;
  • automatically generating, from the reduced volume, by intensity threshold optimization, an optimized intensity threshold and an optimized voxel volume, a color being associated with each intensity;
  • identifying the object by visualization.
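The MIP step the operator iterates on can be sketched in a few lines: threshold the voxel volume, then take the maximum intensity along the viewing axis. The threshold value and test volume below are illustrative.

```python
import numpy as np

def mip(volume, axis=0, threshold=0.0):
    """Maximum intensity projection (MIP) of a voxel volume along one axis,
    keeping only voxels at or above an operator-chosen intensity threshold --
    the iterative visualisation step the method builds on."""
    v = np.where(volume >= threshold, volume, 0.0)
    return v.max(axis=axis)

vol = np.zeros((2, 3, 3))
vol[0, 1, 1] = 0.9                     # object voxel
vol[1, 0, 2] = 0.2                     # background clutter
image = mip(vol, axis=0, threshold=0.5)
print(image[1, 1], image[0, 2])        # clutter below the threshold is suppressed
```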


10.

METHOD FOR DISCRIMINATION AND IDENTIFICATION OF OBJECTS OF A SCENE BY 3-D IMAGING

      
Application Number EP2015080258
Publication Number 2016/097168
Status In Force
Filing Date 2015-12-17
Publication Date 2016-06-23
Owner
  • THALES (France)
  • UNIVERSITE DE LORRAINE (France)
  • SISPIA (France)
Inventor
  • Berechet, Stefan
  • Berechet, Ion
  • Bellet, Jean-Baptiste
  • Berginc, Gérard

Abstract

The invention relates to a method for discrimination and identification of an object in a complex scene by 3-D imaging. The method comprises the following steps:
  • generating a sequence of 2-D MIP images of the object from a 3-D voxel volume of the complex scene, said volume being visualized by an operator using an iterative MIP-type process, from a projection plane and from an intensity threshold determined by the operator at each iteration;
  • automatically extracting, from the sequence of images, the coordinates of a reduced volume corresponding to the sequence of images;
  • selecting one of the intensity thresholds utilized during the iterations;
  • automatically extracting, from the 3-D volume of the complex scene, using the coordinates and the selected intensity threshold, a reduced 3-D volume containing the object;
  • automatically generating, from the reduced volume, by optimization of the intensity threshold, an optimized intensity threshold and an optimized voxel volume, a colour being associated with each intensity;
  • identifying the object by visualization.

IPC Classes

  • G06T 7/00 - Image analysis
  • G06T 15/08 - Volume rendering
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

11.

Method for 3D reconstruction of an object in a scene

      
Application Number 13604237
Grant Number 09135747
Status In Force
Filing Date 2012-09-05
First Publication Date 2013-04-25
Grant Date 2015-09-15
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berechet, Ion
  • Berginc, Gérard
  • Berechet, Stefan

Abstract

A method for 3D reconstruction of an object based on back-scattered and sensed signals, including:
  • generating, from the sensed signals, 3D points to which their back-scattering intensity is respectively assigned, which form a set A of reconstructed data;
  • starting from A, extracting a set B of data whose points are located within a volume containing the object, as a function of volume characteristics F2;
  • starting from B, extracting a set C of data characterizing the external surface of the object, the surface having regions with missing parts, depending on an extraction criterion;
  • based on C, filling in the regions with missing parts by generation of a three-dimensional surface so as to obtain a set D of completed data of the object, without having to use an external database; and
  • identifying the object based on D.
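Steps A→B→C can be sketched as successive filters on the reconstructed point set. The axis-aligned box bounds and the intensity-based surface criterion below stand in for the volume characteristics F2 and the extraction criterion, which the abstract does not specify.

```python
import numpy as np

def extract_sets(points, intensities, lo, hi, surf_min):
    """Sketch of steps A -> B -> C: from reconstructed 3D points (set A),
    keep those inside an axis-aligned box (set B), then keep the
    high-intensity ones as a crude external-surface criterion (set C).
    The box and intensity criterion are illustrative assumptions."""
    inside = np.all((points >= lo) & (points <= hi), axis=1)   # set B mask
    set_b, int_b = points[inside], intensities[inside]
    surface = int_b >= surf_min                                # set C mask
    return set_b, set_b[surface]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [5.0, 5.0, 5.0]])
inten = np.array([0.2, 0.9, 0.8])
b, c = extract_sets(pts, inten, lo=0.0, hi=2.0, surf_min=0.5)
print(len(b), len(c))   # 2 points inside the box, 1 on the "surface"
```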

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 17/05 - Geographic models
  • G06K 9/46 - Extraction of features or characteristics of the image

12.

METHOD FOR 3D RECONSTRUCTION OF AN OBJECT IN A SCENE

      
Document Number 02789226
Status In Force
Filing Date 2012-09-05
Open to Public Date 2013-04-21
Grant Date 2019-11-26
Owner
  • SISPIA (France)
  • THALES (France)
Inventor
  • Berechet, Ion
  • Berginc, Gerard
  • Berechet, Stefan

Abstract

The invention relates to a method for 3D reconstruction of an object based on back-scattered and sensed signals, which comprises:
  • Step 1: generate, from the sensed signals, 3D points to which their back-scattering intensity is respectively assigned, which form a set A of reconstructed data;
  • Step 2: starting from A, extract a set B of data whose points are located within a volume containing the object, as a function of volume characteristics F2;
  • Step 3: starting from B, extract a set C of data characterizing the external surface of the object, this surface having regions with missing parts, depending on an extraction criterion;
  • Step 4: based on C, fill in the regions with missing parts by generation of a three-dimensional surface so as to obtain a set D of completed data of the object, without having to use an external database;
  • Step 5: identify the object based on D.

IPC Classes

  • A63J 5/02 - Arrangements for making stage effects; Auxiliary stage appliances
  • A63J 5/10 - Arrangements for making visible or audible the words spoken

13.

Method for the three-dimensional synthetic reconstruction of objects exposed to an electromagnetic and/or elastic wave

      
Application Number 12934101
Grant Number 08345960
Status In Force
Filing Date 2009-03-24
First Publication Date 2011-01-27
Grant Date 2013-01-01
Owner
  • Thales (France)
  • SISPIA (France)
Inventor
  • Berginc, Gérard
  • Berechet, Ion
  • Berechet, Stefan

Abstract

A method for synthetic reconstruction of objects includes: extracting criteria from a knowledge base; extracting, from sensed signals filtered by the criteria, weak signals; extracting, from the weak signals, weak signals of interest; removing noise from and amplifying the weak signals of interest and obtaining useful weak signals; identifying useful direct information, from useful weak signals filtered by the criteria and supplying optimum criteria; reconstructing, using the useful direct information, information of interest; reconstructing, using the information of interest, useful information and supplying optimum criteria; reconstructing, based on the useful information, three-dimensional information, supplying a recognition state file and supplying the optimum criteria; and updating the criteria with the optimum criteria.
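The weak-signal chain (extract, denoise, amplify) can be illustrated with a thresholded moving average. The noise floor, smoothing kernel and gain below are assumptions; the actual criteria come from the patent's knowledge base and are not disclosed in the abstract.

```python
import numpy as np

def extract_weak_signals(signal, noise_floor, gain):
    """Sketch of the weak-signal steps above: keep samples above an assumed
    noise floor, smooth them with a 3-sample moving average (denoising),
    then amplify. Parameters are illustrative, not the patented criteria."""
    weak = np.where(np.abs(signal) > noise_floor, signal, 0.0)
    kernel = np.ones(3) / 3.0
    denoised = np.convolve(weak, kernel, mode="same")
    return gain * denoised

sig = np.array([0.01, 0.02, 0.30, 0.32, 0.29, 0.02, 0.01])
out = extract_weak_signals(sig, noise_floor=0.1, gain=10.0)
print(out.argmax())   # the amplified burst peaks inside the weak-signal region
```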

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

14.

METHOD FOR THE THREE-DIMENSIONAL SYNTHETIC RECONSTRUCTION OF OBJECTS EXPOSED TO AN ELECTROMAGNETIC AND/OR ELASTIC WAVE

      
Document Number 02719142
Status In Force
Filing Date 2009-03-24
Open to Public Date 2009-10-01
Grant Date 2016-05-03
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berginc, Gerard
  • Berechet, Ion
  • Berechet, Stefan

Abstract

A method for synthetic reconstruction of objects includes: extracting criteria from a knowledge base; extracting, from sensed signals filtered by the criteria, weak signals; extracting, from the weak signals, weak signals of interest; removing noise from and amplifying the weak signals of interest and obtaining useful weak signals; identifying useful direct information, from useful weak signals filtered by the criteria and supplying optimum criteria; reconstructing, using the useful direct information, information of interest; reconstructing, using the information of interest, useful information and supplying optimum criteria; reconstructing, based on the useful information, three-dimensional information, supplying a recognition state file and supplying the optimum criteria; and updating the criteria with the optimum criteria.

IPC Classes

  • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G01B 15/04 - Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons for measuring contours or curvatures
  • G01B 17/06 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations for measuring contours or curvatures
  • G06N 5/00 - Computing arrangements using knowledge-based models

15.

METHOD FOR THE THREE-DIMENSIONAL SYNTHETIC RECONSTRUCTION OF OBJECTS EXPOSED TO AN ELECTROMAGNETIC AND/OR ELASTIC WAVE

      
Application Number EP2009053447
Publication Number 2009/118314
Status In Force
Filing Date 2009-03-24
Publication Date 2009-10-01
Owner
  • THALES (France)
  • SISPIA (France)
Inventor
  • Berginc, Gérard
  • Berechet, Ion
  • Berechet, Stefan

Abstract

The invention relates to a method for the synthetic reconstruction of objects exposed to an electromagnetic and/or elastic wave, comprising identifying a useful piece of three-dimensional information from received signals, in particular weak, noisy signals. The method comprises the steps of:
  • extracting (A11), (A12), (A2), (A31), (A32), (A4) criteria (2), (3), (4), (6), (7) and a grid (5) from the knowledge base;
  • extracting (B1) weak signals (9) from the received signals (8) filtered by the criteria (2);
  • extracting (B2) weak signals of interest (10) from the weak signals (9) filtered by the criteria (3);
  • removing the noise from and amplifying the weak signals of interest (10) to obtain useful weak signals (11);
  • identifying (C) a piece of direct useful information (12) from the useful weak signals (11) filtered by the criteria (4) and providing optimal criteria (2') and (3');
  • reconstructing (D1) a piece of information of interest (13) from the piece of direct useful information (12) filtered by the grids (5) and providing optimal grids (5');
  • reconstructing (D2) a piece of useful information (14) from the piece of information of interest (13) filtered by the criteria (6) and providing optimal criteria (6');
  • reconstructing (D3) a piece of three-dimensional information (15) of the object from the piece of useful information (14) filtered by the criteria (7), providing a recognition state file (16) and providing optimal criteria (7');
  • updating (E1), (E2), (E31), (E32), (E4), in the knowledge base (1), the criteria (2), (3), (6), (7) and the grid (5) using the optimal criteria (2'), (3'), (6'), (7') and the optimal grid (5'), or replacing the criteria (2), (3), (6), (7) and the grid (5).
The method can be used for identifying objects of interest in the management of risks and performance in the industrial, medical, security and defence domains.

IPC Classes

  • G06T 17/00 - 3D modelling for computer graphics
  • G06T 7/00 - Image analysis
  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G01B 17/06 - Measuring arrangements characterised by the use of infrasonic, sonic, or ultrasonic vibrations for measuring contours or curvatures
  • G06K 9/54 - Combinations of preprocessing functions