Disclosed are a system and method for semiconductor placement using a generalized model. According to the present invention, new semiconductor data is placed using a generalized model trained on other semiconductor design data, so the time for training and placement may be shortened; in addition, a plurality of training samples are generated and provided through data augmentation, so a sufficient amount of training data for the generalized model may be secured.
G06F 30/392 - Floor-planning or layout, e.g. partitioning or positioning
G06F 30/398 - Design verification or optimisation, e.g. using design rule check [DRC], layout versus schematics [LVS] or finite element methods [FEM]
G06F 30/327 - Logic synthesis; Behavioural synthesis, e.g. mapping logic, hardware description language [HDL] to netlist, high-level language to register-transfer level [RTL] or netlist
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
H01L 27/02 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate, including integrated passive circuit elements having at least one potential-jump barrier or surface barrier
Disclosed are a design system and method for optimizing area and macro placement on the basis of reinforcement learning. The present invention can perform an optimal placement that reflects the block area by exploiting a hierarchical structure when placing the standard cells, ports, and macros of an integrated circuit, and can provide a placement design optimized for power, performance, and area, the three key metrics of an integrated circuit.
G06F 30/392 - Floor-planning or layout, e.g. partitioning or positioning
G06F 30/398 - Design verification or optimisation, e.g. using design rule check [DRC], layout versus schematics [LVS] or finite element methods [FEM]
G06F 30/327 - Logic synthesis; Behavioural synthesis, e.g. mapping logic, hardware description language [HDL] to netlist, high-level language to register-transfer level [RTL] or netlist
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 117/12 - Sizing, e.g. of transistors or gates
3.
DEEP REINFORCEMENT LEARNING-BASED INTEGRATED CIRCUIT DESIGN SYSTEM USING PARTITIONING AND DEEP REINFORCEMENT LEARNING-BASED INTEGRATED CIRCUIT DESIGN METHOD USING PARTITIONING
The present disclosure may provide parameterized hyperparameter partitioning that balances partition sizes while preserving the hypergraph properties needed to apply deep reinforcement learning to a reduced large-scale hypergraph, and may reduce the computational load and capacity of an artificial neural network through graph reduction.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 30/392 - Floor-planning or layout, e.g. partitioning or positioning
G06F 30/327 - Logic synthesis; Behavioural synthesis, e.g. mapping logic, hardware description language [HDL] to netlist, high-level language to register-transfer level [RTL] or netlist
G06F 111/20 - Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules
G06F 117/12 - Sizing, e.g. of transistors or gates
G06F 115/12 - Printed circuit boards [PCB] or multi-chip modules [MCM]
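The graph-reduction idea in the abstract above, coarsening a large graph while keeping cluster sizes balanced, can be sketched with a simple greedy heavy-edge matching. The heuristic and all names here are illustrative assumptions, not the patented method.

```python
# A minimal sketch of balance-aware graph reduction (coarsening):
# merge endpoints of heavy edges into clusters, refusing merges that
# would exceed a size cap so the resulting partition stays balanced.

def coarsen(num_nodes, edges, max_cluster_size=2):
    """edges: list of (u, v, weight). Returns a cluster id per node."""
    cluster = list(range(num_nodes))          # cluster id per node
    size = {i: 1 for i in range(num_nodes)}   # nodes per cluster
    for u, v, w in sorted(edges, key=lambda e: -e[2]):  # heaviest first
        cu, cv = cluster[u], cluster[v]
        if cu != cv and size[cu] + size[cv] <= max_cluster_size:
            for i, c in enumerate(cluster):   # merge cluster cv into cu
                if c == cv:
                    cluster[i] = cu
            size[cu] += size.pop(cv)
    return cluster

edges = [(0, 1, 5), (1, 2, 1), (2, 3, 4), (0, 3, 2)]
print(coarsen(4, edges))  # [0, 0, 2, 2]: 4 nodes reduced to 2 balanced clusters
```

A reinforcement learning agent would then operate on the 2-node coarse graph instead of the original, shrinking the network's input size.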
5.
USER LEARNING ENVIRONMENT-BASED REINFORCEMENT LEARNING APPARATUS AND METHOD IN SEMICONDUCTOR DESIGN
A user learning environment-based reinforcement learning apparatus and method in semiconductor design are disclosed. The present invention allows a user to configure a learning environment in semiconductor design and to determine optimal positions of a semiconductor device and a standard cell through reinforcement learning using simulation; because the reinforcement learning is performed on the basis of the user-configured learning environment, an optimized semiconductor device position can be determined automatically in various environments.
G06F 30/392 - Floor-planning or layout, e.g. partitioning or positioning
G06F 30/398 - Design verification or optimisation, e.g. using design rule check [DRC], layout versus schematics [LVS] or finite element methods [FEM]
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 117/12 - Sizing, e.g. of transistors or gates
6.
REINFORCEMENT LEARNING APPARATUS AND METHOD FOR OPTIMIZING POSITION OF OBJECT BASED ON SEMICONDUCTOR DESIGN DATA
Disclosed are a reinforcement learning apparatus and method for optimizing the position of a semiconductor device based on semiconductor design data. The present invention may configure a learning environment on the basis of a user's semiconductor design data and provide an optimal position of a semiconductor device during the semiconductor design step through reinforcement learning using simulation.
G06F 30/392 - Floor-planning or layout, e.g. partitioning or positioning
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 111/06 - Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
G06F 117/12 - Sizing, e.g. of transistors or gates
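Position optimization via simulated reinforcement learning, as in the abstract above, needs a simulated placement metric for the agent to maximize. A common choice in placement work is half-perimeter wirelength (HPWL); the netlist and positions below are toy assumptions, not the patented environment.

```python
# A small sketch of a placement reward an RL agent might maximize,
# using half-perimeter wirelength (HPWL) as the simulated metric.

def hpwl(net, pos):
    """Half-perimeter wirelength of one net, given cell positions (x, y)."""
    xs = [pos[c][0] for c in net]
    ys = [pos[c][1] for c in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def reward(nets, pos):
    # Shorter total wirelength -> higher reward for the agent.
    return -sum(hpwl(n, pos) for n in nets)

pos = {"a": (0, 0), "b": (3, 4), "c": (1, 1)}
print(reward([["a", "b"], ["a", "c"]], pos))  # -9
```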
7.
APPARATUS AND METHOD FOR REINFORCEMENT LEARNING BASED ON USER LEARNING ENVIRONMENT IN SEMICONDUCTOR DESIGN
Disclosed are an apparatus and a method for reinforcement learning based on a user learning environment in semiconductor design. According to the present disclosure, a user may configure a learning environment in semiconductor design and determine optimal positions of semiconductor elements and standard cells through reinforcement learning using simulation; because the reinforcement learning is performed based on the user-configured learning environment, optimized semiconductor element positions can be determined automatically in various environments.
G06F 30/327 - Logic synthesis; Behavioural synthesis, e.g. mapping logic, hardware description language [HDL] to netlist, high-level language to register-transfer level [RTL] or netlist
G06F 30/3308 - Design verification, e.g. functional simulation or model checking, using simulation
8.
APPARATUS AND METHOD FOR REINFORCEMENT LEARNING FOR OBJECT POSITION OPTIMIZATION BASED ON SEMICONDUCTOR DESIGN DATA
Disclosed are an apparatus and a method for reinforcement learning for semiconductor element position optimization based on semiconductor design data. According to the present disclosure, a learning environment may be constructed based on a user's semiconductor design data such that optimal positions of semiconductor elements are provided during a semiconductor design process through reinforcement learning using simulation.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06F 30/18 - Network design, e.g. design based on topological or interconnect aspects of utility systems such as water, electricity or gas supply, piping, heating ventilation and air conditioning [HVAC], or cabling
9.
Reinforcement learning device and method using conditional episode configuration
Disclosed are a reinforcement learning device and method using a conditional episode configuration. The present invention imposes conditions on individual decisions and terminates an episode when an imposed condition is not met, thereby maximizing the total sum of rewards reflecting current values. Accordingly, reinforcement learning can be applied easily even to problems with a non-continuous state.
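The conditional-episode idea above can be sketched as an episode loop that checks a per-decision condition and stops accumulating reward once the condition fails. The environment, condition, and threshold below are toy assumptions.

```python
# A minimal sketch of a conditional episode: each decision carries a
# condition, and the episode terminates early when it is violated.

def run_episode(actions, condition, reward_fn):
    """Accumulate rewards until an action violates `condition`."""
    total = 0.0
    for step, a in enumerate(actions):
        if not condition(step, a):   # imposed condition not met
            break                    # terminate the episode early
        total += reward_fn(a)
    return total

# Toy example: actions must stay below a threshold of 5.
total = run_episode([1, 3, 2, 7, 4], lambda step, a: a < 5, float)
print(total)  # 6.0 -- the episode ends at action 7, so only 1+3+2 counts
```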
Disclosed are a reinforcement learning apparatus and a reinforcement learning method for optimizing the position of an object based on design data. The present disclosure may configure a learning environment based on design data of a user and generate the optimal position of a target object, installed around a specific object during a design or manufacturing process, through reinforcement learning using simulation.
G06F 30/3308 - Design verification, e.g. functional simulation or model checking, using simulation
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 30/347 - Physical level, e.g. placement or routing
11.
REINFORCEMENT LEARNING APPARATUS AND METHOD BASED ON USER LEARNING ENVIRONMENT
Disclosed are a user learning environment-based reinforcement learning apparatus and method. According to the disclosure, a user may easily set up a CAD data-based reinforcement learning environment through a user interface (UI) and drag-and-drop, so the reinforcement learning environment can be configured promptly; because the reinforcement learning is performed based on the user-set learning environment, the optimized location of a target object can be produced automatically in various environments.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
12.
REINFORCEMENT LEARNING APPARATUS AND METHOD FOR OPTIMIZING LOCATION OF OBJECT ON BASIS OF DESIGN DATA
Disclosed are a reinforcement learning apparatus and method for optimizing the location of an object on the basis of design data. The present invention may configure a learning environment on the basis of a user's design data and generate, through reinforcement learning using simulation during a design or manufacturing step, an optimal location for a target object to be installed around a specific object.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or the "cutting stock problem"
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 111/06 - Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
13.
DEVICE AND METHOD FOR REINFORCEMENT LEARNING BASED ON USER LEARNING ENVIRONMENT
Disclosed are a device and method for reinforcement learning based on a user learning environment. According to the present invention, a user can easily configure a CAD data-based reinforcement learning environment through a user interface (UI) and drag-and-drop, quickly set up the reinforcement learning environment, and perform reinforcement learning on the basis of the configured environment, thereby automatically generating an optimized position of a target object in various environments.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or the "cutting stock problem"
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 111/06 - Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
14.
DEEP REINFORCEMENT LEARNING APPARATUS AND METHOD FOR PICK-AND-PLACE SYSTEM
Disclosed is a deep reinforcement learning apparatus and method for a pick-and-place system. According to the present disclosure, a simulation learning framework is configured to apply reinforcement learning to make pick-and-place decisions using a robot operating system (ROS) in a real-time environment, thereby generating stable path motion that meets various hardware and real-time constraints.
Disclosed is a device for data-based reinforcement learning. The disclosure allows an agent to learn a reinforcement learning model so as to maximize a reward for an action selectable according to a current state in a random environment, wherein a difference between a total variation rate and an individual variation rate for each action is provided as a reward for the agent.
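The reward design described above, the difference between a total variation rate and each action's individual variation rate, can be sketched directly. The metric values and function names here are illustrative assumptions.

```python
# A small sketch of the reward design above: the reward for each action
# is the total variation rate minus that action's individual variation rate.

def rewards(prev, curr):
    """prev/curr: per-action metric values before and after a step."""
    individual = [(c - p) / p for p, c in zip(prev, curr)]   # per-action rate
    total = (sum(curr) - sum(prev)) / sum(prev)              # overall rate
    return [total - r for r in individual]                   # reward per action

# Action 0 rose 20% while the total rose only 10%, so it is penalized;
# action 1 stayed flat while the total rose, so it is rewarded.
print(rewards([10.0, 10.0], [12.0, 10.0]))  # [-0.1, 0.1]
```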
Disclosed are a generative adversarial network-based classification system and method that can generate missing data as imputation values similar to real data using a generative adversarial network (GAN), and that allow training with labeled data sets as well as irregular data sets such as unlabeled data sets.
Disclosed is a decision-making agent having a hierarchical structure. The present invention allows a user without knowledge of reinforcement learning to train models by easily setting the core factors of reinforcement learning and applying them to business problems.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06N 5/04 - Inference or reasoning models
18.
DECISION-MAKING AGENT GENERATING APPARATUS AND METHOD
Disclosed are a decision-making agent generating apparatus and method. The decision-making agent generating apparatus according to the present disclosure comprises: a training agent unit (100) for generating an arbitrary reinforcement learning target model on the basis of input data regarding a business domain and training it, wherein user setting data is reflected in the training of the reinforcement learning target model; and a deploy agent unit (200) for deploying the reinforcement learning target model generated by the training agent unit (100). When an optimization and automation model related to a company's decision-making is requested, the present invention can generate and provide such a model.
Disclosed are an OCR-based document analysis system and method using a virtual cell. According to the present invention, letters including characters and numbers described in items on a document may be recognized, and a virtual cell may be generated on the basis of relative positions of the recognized letters to match relative position information with respect to the numbers.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/70 - Determining position or orientation of objects or cameras
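The virtual-cell idea in the OCR abstract above, grouping recognized letters by their relative positions so numbers align with their item labels, can be sketched by clustering tokens into rows by y-coordinate. The tokens, coordinates, and tolerance are illustrative assumptions, not the patented layout analysis.

```python
# A minimal sketch of virtual cells: recognized tokens with (x, y)
# positions are grouped into rows by y-coordinate, then ordered
# left-to-right, so each number lands next to its item label.

def virtual_rows(tokens, y_tol=5):
    """tokens: list of (text, x, y). Returns rows of texts, left to right."""
    rows = []
    for text, x, y in sorted(tokens, key=lambda t: (t[2], t[1])):
        if rows and abs(rows[-1][0] - y) <= y_tol:   # same virtual row
            rows[-1][1].append((x, text))
        else:
            rows.append((y, [(x, text)]))            # start a new row
    return [[t for _, t in sorted(cells)] for _, cells in rows]

tokens = [("Total", 10, 100), ("42.00", 80, 102),
          ("Item", 10, 60), ("Widget", 80, 61)]
print(virtual_rows(tokens))  # [['Item', 'Widget'], ['Total', '42.00']]
```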
Disclosed are a reinforcement learning device and method using a conditional episode configuration. The present invention imposes conditions on individual decisions and terminates an episode when an imposed condition is not met, thereby maximizing the total sum of rewards reflecting current values. Accordingly, reinforcement learning can be applied easily even to problems with a non-continuous state.
An OCR-based document analysis system and method are disclosed. The present invention provides confidence scores for the relative location information of characters recognized through OCR, for the connections between recognized items, and for the recognized information, and can thereby reduce both the rework of the data table and the time an inspector needs to verify prediction accuracy.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
The present invention relates to a deep learning-based system and method for automatically determining the degree of damage to each area of a vehicle, which can quickly calculate a consistent and reliable quote for vehicle repair by analyzing an image of a vehicle in an accident using a deep learning-based Mask R-CNN framework, extracting a component image corresponding to the damaged part, and automatically determining the degree of damage in the extracted component image based on a pre-trained model.
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Disclosed are a system and method for analyzing damage to a vehicle. The present invention can analyze a damage type and damage degree for each component by using an accident image of a vehicle, and provide objective replacement and repair information of a component, according to an analysis result.
Disclosed are a generative adversarial network-based classification system and method. The present invention can generate missing data as imputation values similar to real data using a generative adversarial network (GAN), thus improving the overall quality of the data, and allows training with labeled data sets as well as irregular data sets such as unlabeled data sets.
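The GAN-based imputation described above keeps observed values and lets a generator fill only the missing slots, combined through a mask (as in GAIN-style imputation). The generator below is a deliberately simple stand-in, not a trained adversarial network.

```python
# A minimal sketch of mask-based missing-data imputation: observed
# entries are kept, and the generator's outputs fill only missing slots.

def impute(row, mask, generator):
    """mask[i] == 1 where row[i] is observed, 0 where it is missing."""
    fake = generator(row, mask)  # generator proposes a complete row
    return [x if m else g for x, m, g in zip(row, mask, fake)]

# Stand-in "generator": fill with the mean of observed values. A real
# GAN would instead learn to propose values a discriminator cannot
# distinguish from observed data.
def mean_generator(row, mask):
    obs = [x for x, m in zip(row, mask) if m]
    mu = sum(obs) / len(obs)
    return [mu] * len(row)

print(impute([1.0, 0.0, 3.0], [1, 0, 1], mean_generator))  # [1.0, 2.0, 3.0]
```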
Disclosed are a reinforcement learning-based fraudulent loan classification system and method. The present invention improves the classification of fraudulent vehicle loans using reinforcement learning, thereby minimizing the predicted losses caused by fraudulent loans.
Disclosed is a device for data-based reinforcement learning. The present invention allows an agent to learn a reinforcement learning model so as to maximize the reward for an action selectable according to the current state in a random environment, wherein the difference between a total variation rate and the individual variation rate of each action is provided as the reward for the agent.
The present invention relates to a deep learning-based system and method for automatically determining the degree of damage to each area of a vehicle, which enables a user to quickly calculate a consistent and reliable estimate for vehicle repair by: analyzing an image of a vehicle in an accident by using a deep learning-based Mask R-CNN framework; extracting a component image corresponding to a damaged part of the vehicle; and automatically determining the degree of damage in the extracted component image on the basis of a pre-trained model.
G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor, of still image data
The present invention relates to a deep learning-based model training method and system for automatically determining the damage level of each vehicle part. A model capable of rapidly calculating a consistent and reliable vehicle repair estimate is generated by training it on damage levels according to damage types, and by training it, using a deep learning-based Mask R-CNN framework and an Inception V4 network structure, to automatically extract, from photos of an accident vehicle, those photos from which a damage level can be determined.