A computer-implemented method comprising: receiving a first input associated with a first entity at a first level of a hierarchy; receiving a second input, associated with a second entity at a second level of the hierarchy, the second entity linked to the first entity within the hierarchy; generating a first low-dimensional feature representation based on the first input, the first low-dimensional feature representation representing the first entity; and generating a second low-dimensional feature representation based on the first input, the second input and the first low-dimensional feature representation, the second low-dimensional feature representation representing the second entity.
2.
SYSTEMS AND METHODS FOR MICRO-REACTOR POWER GENERATION IN DATACENTERS
A datacenter power system may include a co-location including a plurality of computing devices. A datacenter power system may include a micro-plant in electrical communication with the co-location to supply micro-plant electrical power to the co-location, the micro-plant including: a plurality of micro-reactors, wherein each micro-reactor is configured to produce thermal energy, and at least one generator in thermal communication with the plurality of micro-reactors configured to convert at least a portion of the thermal energy to the micro-plant electrical power.
A computing system (10) monitors inference conditions of a generative model (74), detects a predetermined trigger condition (48) among the monitored inference conditions, and responsive to detecting the predetermined trigger condition (48), consolidates a memory vector database (76) of the generative model (74) to thereby extract semantic memories (66) from the vector database (76), updates the generative model (74) using the extracted semantic memories (66), and deploys the consolidated generative model (74). The predetermined trigger condition (48) may be at least one of a database size condition, an available memory size condition, a processor load condition, or a scheduled time condition.
Timestamps are used to prevent data tampering. A trusted data generator generates a concatenated data object including a data object and metadata, which includes a location indication, a timestamp, and, depending on the type of encryption, an integrity value. The trusted timestamp generator encrypts the concatenated data object, stores the encrypted concatenated data object (e.g., in untrusted storage) in accordance with a location indicated by the location indication, and protects the timestamp in trusted storage (e.g., by protecting at least the root timestamp). A trusted data validator extracts the data object and metadata from the decrypted concatenated data object and validates the data object by comparing the storage location to the extracted location indication, the protected timestamp to the extracted timestamp, and a calculated integrity value to the extracted integrity value. Timestamps may be validated without calculations and with tolerances, supporting performance customization/security optimization.
5.
CONTAINER MODE MANAGEMENT ENGINE IN A SECURITY MANAGEMENT SYSTEM
Methods, systems, and computer storage media for providing container secure computing modes using a container mode management engine of a security management system. A container secure computing mode can include a secure state in which a container operates to prioritize security measures and practices. A container secure computing mode can be assigned to a container instance and enforced via a container security agent. In operation, a container instance is initialized, the container instance is associated with a container security agent having a secure compute mode transition control for the container instance. Based on the secure compute mode transition control, the container instance is transitioned into a secure state. A container operation of the container instance is accessed. The execution of the container operation is restricted based on the secure state of the container instance. The secure state is associated with a secure state configuration that supports restricting the container operation.
6.
RESOURCE-BASED ASSIGNMENT OF BEHAVIOR MODELS TO AUTONOMOUS AGENTS
The disclosed concepts relate to employing agent behavior models to control agent behavior in an application, such as a video game or a simulation. For instance, in some implementations, agent behavior models with relatively greater resource utilization, such as generative language models, are assigned to agents that are at higher levels of an agent hierarchy. Agent behavior models with relatively less resource utilization, such as reinforcement learning or hard-coded models, are assigned to agents that are at lower levels of the agent hierarchy.
G06N 3/006 - Vie artificielle, c.-à-d. agencements informatiques simulant la vie fondés sur des formes de vie individuelles ou collectives simulées et virtuelles, p. ex. simulations sociales ou optimisation par essaims particulaires [PSO]
7.
CONTENT-AWARE ARTIFICIAL INTELLIGENCE GENERATED FRAMES FOR DIGITAL IMAGES
A data processing system implements receiving an image and a natural language prompt input by a user requesting that an application generate an digital picture frame for the image; analyzing the prompt using a key-phrase extraction unit to extract one or more key phrases from the prompt that describe a topic of the frame to be generated for the image; providing the one or more key phrases as an input to a retrieval engine; analyzing the one or more key phrases with the retrieval engine to identify a set of candidate frame images from among a plurality of frame images in a labeled frame images datastore; analyzing the set of candidate frame images using an image placement unit to obtain a set of framed images based on the image and the candidate frame images; and presenting the set of framed images on a user interface of the application.
G06F 16/58 - Recherche caractérisée par l’utilisation de métadonnées, p. ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement
An enhanced router is described that improves network performance for AI workloads by providing in-network primitives that improve the performance of operations such as Broadcast and Reduce operations. The enhanced router is configured to execute primitives for data payloads in packets associated with the workloads in the network prior to forwarding the packets to hosts. The network device receives, via a control plane, a primitive indicative of an analytical, computational, or transformative operation to be performed on data payloads transmitted by data packets associated with a workload being processed in an SDN. The primitive is associated with a protocol for configuring network devices to perform in-network acceleration of workloads in coordination with source and destination hosts in the SDN.
H04L 41/0895 - Configuration de réseaux ou d’éléments virtualisés, p. ex. fonction réseau virtualisée ou des éléments du protocole OpenFlow
H04L 41/0896 - Gestion de la bande passante ou de la capacité des réseaux, c.-à-d. augmentation ou diminution automatique des capacités
H04L 41/16 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p. ex. des réseaux de commutation de paquets en utilisant l'apprentissage automatique ou l'intelligence artificielle
H04L 69/22 - Analyse syntaxique ou évaluation d’en-têtes
A communications network utilizes dynamic service models to facilitate programmability within a radio access network. An analysis node is configured to execute an analysis application based on information from a virtual network function. The virtual network function configured to provide a dynamic service model that defines one or more hook points within instructions for operating the virtual network function and one or more parameters that can be accessed by a codelet at the hook point. The virtual network function dynamically receives a codelet from the analysis node. The virtual network function verifies that the codelet complies with the dynamic service model. The virtual network function executes the codelet at one of the hook points.
Disclosed are techniques for synthesizing large amounts of human-computer interaction data that is representative of real-world user data. An automated screenshot capture engine may cause an automated agent to use an application or a website in a manner designed to mimic real-world human-computer interaction. Screenshots are captured to record how a user might interact with the application. Metadata, such as window location and size, may be obtained for each screenshot. Screenshots and corresponding metadata may be automatically annotated with a large language model to indicate the context of the application and/or computer system when the screenshot was captured. Data created in this way may be used to validate AI-based software application features or to train (or retrain) a machine learning model that predicts human-computer interactions. Automated synthesis of training data significantly increases the scale of data that can be obtained for training while also reducing computing and financial costs.
G06V 10/774 - Génération d'ensembles de motifs de formationTraitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiquesDispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]Séparation aveugle de source méthodes de Bootstrap, p. ex. "bagging” ou “boosting”
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 10/98 - Détection ou correction d’erreurs, p. ex. en effectuant une deuxième exploration du motif ou par intervention humaineÉvaluation de la qualité des motifs acquis
G06V 20/62 - Texte, p. ex. plaques d’immatriculation, textes superposés ou légendes des images de télévision
G06V 30/146 - Alignement ou centrage du capteur d’image ou du champ d’image
A heat exchanger comprising a heatsink and/or coldplate is disposed on a semiconductor having a heat-producing die within. A layer of thermal interface material (TIM) is disposed between the heat exchanger and semiconductor to enhance heat dissipation as the semiconductor is operated. A seal including a gasket or edgebond adhesive is provided around the perimeter edges of the heat exchanger and semiconductor to seal the gap around the periphery of the TIM layer to prevent the TIM from getting pumped out with cyclical thermal loading of the assembly. A capillary tube in the heat exchanger extending from the internal TIM layer to an opening exposed to the surrounding environment provides a reservoir to capture TIM that would otherwise be pumped out. Dimensions of the capillary tube are selected to prevent environmental air from passing by the TIM in the tube and getting entrapped in the TIM layer as voids.
H01L 23/10 - ConteneursScellements caractérisés par le matériau ou par la disposition des scellements entre les parties, p. ex. entre le couvercle et la base ou entre les connexions et les parois du conteneur
H01L 23/42 - Choix ou disposition de matériaux de remplissage ou de pièces auxiliaires dans le conteneur pour faciliter le chauffage ou le refroidissement
H01L 23/367 - Refroidissement facilité par la forme du dispositif
H01L 23/473 - Dispositions pour le refroidissement, le chauffage, la ventilation ou la compensation de la température impliquant le transfert de chaleur par des fluides en circulation par une circulation de liquides
12.
ADAPTIVE QUERY ROUTING FOR NATURAL LANGUAGE GENERATORS BASED ON QUERY DIFFICULTY
Natural language generators (NLGs), including large language models, are powerful technologies that are in widespread use. However, typically, as NLGs become more powerful and sophisticated, their correspondingly increased complexity requires substantial processing resources. The present disclosure provides automated techniques for dynamically routing queries between at least two NLGs based on an assessment of query difficulty. Less difficult queries can be routed to a less resource intensive NLG, while more difficult queries are routed to a more sophisticated, but more resource intensive NLG. Routing less difficult queries to a less resource intensive model can thus conserve computing resources, while providing little to no drop in response quality, and in some cases providing improved response quality.
A system is configurable to access a precomputed topology associated with a mesh that comprises a plurality of object components. The precomputed topology defines a plurality of object component groups that each comprise a respective set of object components of the mesh. The system is configurable to determine a traversal likelihood metric associated with the mesh that indicates a likelihood that rays of a ray trace operation will traverse acceleration structure nodes representing object components of the mesh, and use the plurality of object component groups as inputs to construct an acceleration structure. When the traversal likelihood metric satisfies a threshold, leaf nodes of at least one intermediate node of the acceleration structure each comprise a respective object component of an object component group. When the traversal likelihood metric fails to satisfy the threshold, at least one leaf node of the acceleration structure comprises an object component group.
Bidirectional flows of a communication session in a software defined network (SDN) are efficiently managed. A smart switch comprises a digital processing unit (DPU) complex comprising one or more DPUs, and a switching complex comprising one or more network processing units (NPUs). The DPU complex is configured to disaggregate enforcement of policies of the SDN from hosts of the SDN. The switching complex is configured to perform network routing of packets in the SDN. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN. The switching complex is configured to perform policy enforcement of data flows for communication sessions that are offloaded from the DPU complex to the switching complex.
Methods, systems, and computer storage media for providing workload management using a workload management engine in an artificial intelligence (AI) system. In particular, workload management incorporates adaptive strategies that adjust the neural network models employed by a processing unit (e.g., NPU/GPU/TPU) based on the dynamic nature of workloads, workload management factors, and workload management logic. The workload management engine provides the workload management logic to support strategic decision-making for processor optimization. In operation, a plurality states of workload management factors are identified. A task associated with a workload processing unit is identified. Based on the task and the plurality of states of the workload processing unit, a neural network model from a plurality of neural network models is selected. The plurality of neural network models include a full neural network model and a reduced neural network model. The task is caused to be executed using the identified neural network model.
A method for securely providing a remote desktop session includes receiving, at a user device, an encrypted video stream that includes graphics content of the remote desktop session and that is characterized by a frame rate that is variable. The method further provides for reducing variability in the frame rate of the encrypted video stream by duplicating select encrypted frames of the video stream and inserting the duplicated encrypted frames into the video stream. The method additionally provides for delivering the video stream to a local application configured to generate control signals that cause a graphics processing unit (GPU) of the user machine to render the video stream to a display of the user machine.
H04N 21/254 - Gestion au sein du serveur de données additionnelles, p. ex. serveur d'achat ou serveur de gestion de droits
H04N 21/4405 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé impliquant le décryptage de flux vidéo
A computer implemented method comprising: obtaining a simulated input image simulating a second imaging modality based on a source image in a first imaging modality; inputting the simulated input image into a first machine learning model trained based on simulated training images in the second imaging modality, thereby generating a latent representation of the simulated input image; and causing the latent representation to be input into a second machine learning model trained based on empirical training images in the second image modality, thereby resulting in the second machine learning model generating a synthesized output image in the second modality.
Topological devices with asymmetric junction(s) are described. An example topological device (100) includes a superconducting wire (112) comprising a first segment (114) and a second segment (116), where the first segment (114) is configurable to be in a trivial phase and the second segment (116) is configurable to be in a topological phase. The topological device further includes an asymmetric junction (182), at an interface of the first segment (114) and the second segment (116). The asymmetric junction (182) is operable to couple a Majorana zero mode, MZM, in the second segment (116) to a quantum dot (172) or a transport lead (153) such that the asymmetric junction (182) increases strength of a coupling between the MZM and the quantum dot (172) or the transport lead (153) while reducing strength of a coupling between any states formed in the first segment (114) of the superconducting wire (112) and the quantum dot (172) or the transport lead (153).
G06N 10/40 - Réalisations ou architectures physiques de processeurs ou de composants quantiques pour la manipulation de qubits, p. ex. couplage ou commande de qubit
19.
EXPEDITING GENERATIVE TOKEN PRODUCTION USING SPECULATIVE SAMPLING, ADDED GUIDANCE, AND LANGUAGE MODELS OF DIFFERENT CAPACITIES
A technique accelerates the generative production of tokens using a target language model that operates in cooperation with a draft language model. The target language model is more capable, but slower, compared to the draft language model. In operation, the draft language model transforms prompt tokens into draft tokens. The target language model edits the draft tokens, e.g., by selecting zero, one, or more of the draft tokens, and by also predicting a next token to follow the draft token(s) (if any) that are selected. Further, the target language model produces guidance vector information. In a subsequent cycle, the draft language model uses the guidance vector information to produce an updated set of set of draft tokens. The guidance vector information informs the draft language model of the embedding space being used by the target language model. This achieves a more effective cooperative relation between the two models.
Example implementations include a method, apparatus, and computer-readable medium configured for implementing a workflow using a large language model (LLM). A workflow automation application sends a first prompt to a large language model (LLM) to transform a first input data source in a first format to a second format. The workflow automation application sends a second prompt to the LLM to define multiple steps of a workflow starting on the data source in the second format. The workflow automation application sends a third prompt to the LLM to define execution of business logic for each step of the workflow. The workflow automation application receives, from the LLM, output data indicating that each step of the workflow has been executed.
In certain embodiments, a time series-based anomaly detection method is provided, which is able to identify anomalous user accounts highly effectively. An activity predictor is used to model normal behaviors of individual accounts and to assess an extent to which a current behavior associated with differs from its past normal behavior. Part of an activity sequence is inputted to the activity predictor, and a resulting activity prediction (the activity predictor's prediction of normal behavior) is compared with the remaining part of the sequence. In preferred embodiments, a multi-stage approach is used, with a more lightweight form of anomaly detection applied in a first stage, and the time-series based detection performed in a second stage only on a subset of activity sequences escalated from the first stage.
This patent relates to hinged devices, such as computing devices. One example includes a first portion including a first input/output device and a second portion including a second input/output device. A hinge assembly includes a flexible hinge that removably couples the first and second portions and allows relative rotation between the first and second portions. The flexible hinge is biased into the first portion to reduce a percentage of the flexible hinge exposed between the first and second portions at a given rotational or angular orientation of the first and second portions.
Disclosed is a semiconductor-superconductor hybrid structure (10), particularly for topological quantum computing, which includes a substrate (12), a buffer region (14) having a superlattice sub-region (24) over the substrate and a graded lattice sub-region (26) over the superlattice sub-region, an active region (16) over the buffer region, a superconductor (18) consisting of one or more patterned nanowires over the active region, and a cap layer (20) encapsulating the superconductor and top surface portions of the active region not covered by the superconductor. The active region covers an entire top surface of the buffer region, is configured to quantum confine electrons, and has a top barrier layer (34) configured to tune coupling between the superconductor and the active region. The superlattice sub-region is configured to prevent impurity diffusion and crystalline defects propagating from the substrate to the active region, while the graded lattice sub-region is configured to provide a lattice constant transition between the substrate and the active region.
H10D 62/81 - Corps semi-conducteurs, ou régions de ceux-ci, de dispositifs ayant des barrières de potentiel caractérisés par les matériaux de structures présentant des effets de confinement quantique, p. ex. des puits quantiques uniquesCorps semi-conducteurs, ou régions de ceux-ci, de dispositifs ayant des barrières de potentiel caractérisés par les matériaux de structures présentant une variation de potentiel périodique ou quasi-périodique
H10D 48/00 - Dispositifs individuels non couverts par les groupes
H01L 21/02 - Fabrication ou traitement des dispositifs à semi-conducteurs ou de leurs parties constitutives
This disclosure describes utilizing a generative document system to dynamically build and provide generative search result documents. The generative document system utilizes an aggregated framework that leverages one or more large generative models (LGMs). For example, the aggregated framework includes three stages where local processes are applied to generative outputs from LGMs, with each stage building upon the generative inputs from previous stages. The generative document system uses the aggregated framework to create generative search result documents based on search queries and their corresponding search result links. These generative search result documents provide interactive, intuitive, comprehensive, and flexible curation of answers that address the respective search queries.
A computing device assembly (100) is provided, including a rack (10), and a plurality of compute units (12) that are horizontally oriented and mounted within the rack (10) in one of two vertical stacks (12A, 12B). The computing device assembly (100) further includes a plurality of switches (16) that are vertically oriented and mounted along a front side (24) of the rack (10) laterally between the two vertical stacks (12A, 12B) of compute units (12). The computing device assembly (100) further includes a plurality of horizontal cable backplanes (14) mounted in a vertical stack along a rear side (22) of the rack (10). The computing device assembly (100) further includes a plurality of vertical cable shuffles (20) mounted between the two vertical stacks (12A, 12B) of compute units (12) and between the vertically oriented switches (16) and the vertical stack of horizontal cable backplanes (14).
Disclosed herein is a system for implementing a management controller on a node, or network server, that is dedicated to monitoring the individual health of a plurality of accelerator modules configured on the node. Based on the monitored health, the management controller is configured to implement autonomous power cycle control of individual accelerator modules. The autonomous power cycle control is implemented without violating the requirements of standards established for accelerator modules (e.g., OPEN COMPUTE PROJECT requirements, PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIe) interface requirements).
Systems and techniques for facilitating unified multichannel communication are provided. The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication ("UMC") service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication. The UMC session can involve multiple participants, including human users and software agents (e.g., conversational bots, virtual agents, digital assistants, and other dialog interfaces). The UMC platform can facilitate creating and interacting with a digital assistant providing unified multichannel communication.
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p. ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p. ex. des réponses automatiques ou des messages générés par un agent conversationnel
H04L 51/216 - Gestion de l'historique des conversations, p. ex. regroupement de messages dans des sessions ou des fils de conversation
H04L 51/56 - Messagerie unifiée, p. ex. interactions entre courriel, messagerie instantanée ou messagerie IP convergente [CPM]
28.
SYSTEM AND METHOD FOR SPEECH LANGUAGE IDENTIFICATION
A method, computer program product, and computing system for speech language identification. An input speech signal is received in a particular language. The input speech signal is processed by a plurality of speech recognition processing paths, each speech recognition processing path to recognize an associated subset of the languages. Processing, with each of the speech recognition processing paths, the input speech signal using machine learning to identify a language which is a closest match to the particular language of the input speech signal, resulting in a plurality of identified languages. The input speech signal and an indication of each of the plurality of identified languages is received in a further speech recognition processing path. The input speech signal is processed, using machine learning, to recognize one of the identified languages as a closest match to the particular language.
A computing system including one or more processing devices configured to identify one or more severe hook faults in a stabilizer channel. Identifying the severe hook faults includes receiving a circuit channel check matrix, the columns of which indicate values of checks associated with elementary faults of the stabilizer channel. Identifying the severe hook faults further includes receiving a phenomenological channel check matrix as a sub-matrix of the circuit channel check matrix, receiving a logical effect matrix, and receiving a weight vector that indicates probability weights of the elementary faults. Based at least in part on the circuit channel check matrix, the logical effect matrix, the phenomenological channel check matrix, and the weight vector, identifying the severe hook faults further includes computing column indices of columns of the circuit channel check matrix that correspond to the severe hook faults. The processing devices output an indication of the severe hook faults.
A method for user intent evaluation includes receiving recorded speech (306) of a human user (104). One or more attention indicators (406) are detected in an image (400) of the human user (104). Using a trained command recognition model (504), a command confidence (506) is estimated indicating a confidence that the recorded human speech (306) includes a command for a smart assistant computing system (100). Based at least in part on detecting the one or more attention indicators (406), and the command confidence (506) exceeding a command confidence threshold, the human user (104) is classified as intending to interact with the smart assistant computing system (100).
A signal conditioning connector assembly (50) is provided, including an enclosure (52), and a plurality of signal conditioner layers (68) mounted within the enclosure (52). Each signal conditioner layer (68) includes a substrate (66), signal conditioner circuitry (60) mounted to the substrate (66), first electrodes (64) forming a first connector on a first side of the signal conditioner circuitry (60), second electrodes (65) forming a second connector on a second side of the signal conditioner circuitry (60), a heat spreader (62) in thermal communication with a side of the signal conditioner circuitry (60) opposite the substrate (66), and a liquid cooling pipe (54) positioned adjacent and in thermal communication with the heat spreader (62). The liquid cooling pipe (54) is configured to draw heat away from the heat spreader (62) for thermal management. The signal conditioning connector assembly (50) can be positioned adjacent an interface between the vertical cable shuffle (20) and the horizontal cable backplane (14) within the rack (10) of the computing device assembly (100) of the first and second aspects.
Large language models (LLMs) and visual-language models (VLMs) are able to provide robust results based on specified formatting and organization. Although LLMs and VLMs are designed to receive natural language input, users often lack the skill, knowledge, or patience to utilize LLMs and VLMs to their full potential. By leveraging screen understanding, AI prompts (or "pills") may automatically be generated for artificial-intelligence (AI) assistance and query resolution in a VLM/LLM environment. Using an image encoder, a current screenshot is processed into an image embedding and compared to text embeddings representing screenshot activities. By identifying the text embedding having the closest similarity to the image embedding, a screen activity being performed by the user may be determined. Suggested AI prompts (or "pills") may then be generated in real-time to assist the user in performing the screen activity.
Systems and techniques for facilitating unified multichannel communication are provided. The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication ("UMC") service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication.
This document relates to providing adaptive teleconferencing experiences using generative image models. For example, the disclosed implementations can employ inpainting and/or image-to-image restyling modes of a generative image model to generate images for a teleconference. The images can be generated based on prompts relating to the teleconference. Users can be superimposed on the generated images, thus giving the appearance that the users are present in an environment generated by the generative image model.
A system, method, and computer-readable media for executing applications for radio interface controller (RIC) management are disclosed. The system includes far-edge datacenters configured to execute a radio access network (RAN) function and a real-time RIC; near-edge datacenters configured to execute a core network function and a near-real-time RIC or a non-real-time RIC; and a central controller. The central controller is configured to: receive inputs of application requirements, hardware constraints, and a capacity of first and second computing resources at the far-edge datacenters and near-edge datacenters; enumerate a plurality of feasible combinations of application locations and configurations that satisfy the application requirements and hardware constraints; incrementally allocate a quant of the first or second computing resources to a feasible combination that would produce a greatest utility from the quant based on a utility function; and deploy each of the plurality of applications.
H04L 41/16 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p. ex. des réseaux de commutation de paquets en utilisant l'apprentissage automatique ou l'intelligence artificielle
36.
TARGETING OPERATING SYSTEM PROFILES FOR BARE METAL RESTORE
Example solutions enhance security of bootable media images during bare metal restores. A boot image generation request and original image integrity data is received from a first computing device. An original image timestamp associated with the boot image generation request is stored. A message is received from a second computing device that includes current image integrity data generated by the second computing device using a current boot image. The original image integrity data is verified to match the current image integrity data. The message is determined to have been received within a length of time from the original image timestamp. A registration of the second computing device is performed within the device management system based on the verification and the determination.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
Technologies described herein relate to a computer-implemented environment that includes multiple bots with which a user can interact. At least one generative model is employed to identify which bot of the multiple bots is to respond to a user communication set forth by the user in the computer-implemented environment.
The present disclosure proposes a method, apparatus and computer-readable medium for action decision based on self-tune mechanism. A set of previous states associated with a target application and a set of previous rewards corresponding to the set of previous states may be obtained. An action in natural language for the target application may be generated based on the set of previous states and the set of previous rewards. It may be verified with a set of predefined rules whether the action is reasonable. Action code in computer language corresponding to the action may be generated in response to verifying that the action is reasonable. The target application may be caused to execute the action code.
Techniques are described herein that are capable of performing AI-based conversion of a natural language prompt ("prompt") to a system-specific segment definition using entity reduction and renaming. The prompt requests data that satisfies a search criterion from a database that stores entities having entity names. Each entity name not satisfying a relevance criterion is changed based on content of the respective entity. An AI model is caused to determine a first subset of the entity names that is relevant to the prompt by providing a first AI prompt, the prompt, and the entity names as first inputs to the AI model. The AI model is caused to convert the prompt to the system-specific segment definition, which conforms to a particular format, by providing a second AI prompt, the prompt, information regarding the particular format, and the first subset of the entity names as second inputs to the AI model.
The disclosed concepts relate to leveraging a generative language model for interactive constraint solving. For instance, a generative language model can be prompted to generate a constraint data structure that represents a user preference expressed in natural language. The constraint data structure can be parsed to extract constraint parameters that can be programmatically solved by a constraint solver. The generative language model can also be prompted to generate constraint-checking code that can be invoked by the constraint solver.
G06F 17/11 - Opérations mathématiques complexes pour la résolution d'équations
G06Q 10/04 - Prévision ou optimisation spécialement adaptées à des fins administratives ou de gestion, p. ex. programmation linéaire ou "problème d’optimisation des stocks"
A method for verifying an application structured to execute on a client device. A challenge request is sent to the application. A candidate challenge answer is received from the application in response to the challenge request, which is then provided as input to a verification computation with a challenge input. Based on an output of the verification computation, it is determined that the candidate challenge answer is generated by providing the challenge input to a challenge computation. Based on the determination that the candidate challenge answer is generated by providing the challenge input to the challenge computation, the application is verified.
A data processing system implements receiving a first request to collaboratively author a mixed reality experience with a vision-language model planner, the mixed reality experience comprising an interactive guide for performing a task involving a complex multipart object; obtaining 3D object geometry information for the complex multipart object; obtaining a description of the task to be performed including a plurality of subtasks each associated with a user action to be performed on a respective part of the complex multipart object; constructing a prompt to the model, the prompt instructing the model to generate a task list based on the geometry information and the description of the task to be performed; providing the prompt as an input to the model to obtain the task list; and generating content for the mixed reality experience using the task list in response to a second request to execute the mixed reality experience.
The techniques disclosed herein enable systems to offload deep packet inspection tasks to a network interface card. This is accomplished by configuring the network interface card with a configuration file. The configuration file identifies target protocols, target fields, a number of packets to analyze for each target protocol, as well as identification tables that enable the network interface card to identify packet attributes. Once configured, the network interface card can receive and analyze incoming network packets. Accordingly, the network interface card extracts and parses values represented by the network packet in accordance with the parameters of the configuration file. The extracted values are compared against the entries of the identification table to derive an attribute identifier which can be returned to a network protocol stack. Moreover, the configuration file can also provide support for standard and non-standard network protocols.
Systems for dynamically synthesizing widgets for chart modification are provided. A method can include receiving data indicating a dataset of structured data. The data can be provided by a user through a user interface (UI). The UI can display the data on a chart. A request can be received by the UI. The request can be provided by the user. The request can indicate an alteration to a representation of the data on the chart. A widget can be dynamically synthesized based on the request. The widget can be operable to alter the representation of the data on the chart based on user interaction with the widget. The UI can present the widget on the UI alongside the chart. The chart can be altered based on user interaction with the widget.
A combined hyperparameter and proxy model tuning method is described. The method involves multiple search iterations. In each search iteration, candidate hyperparameters are considered. An initial ('seed') hyperparameter is determined, and used to train one or more first proxy models on a target dataset. From the first proxy model(s), one or more first synthetic datasets are sampled. A first evaluation model is fitted to each first synthetic dataset, for each candidate hyperparameter, enabling each candidate hyperparameter to be scored. Based on the respective scores assigned to the candidate hyperparameters, a candidate hyperparameter is selected and used to train one or more second proxy models on the target dataset.
Display devices, display panels, and methods for manufacture thereof described herein provide sub-pixel designs for display devices with under display cameras. In an aspect, a display panel comprising a display portion is provided. The display portion comprises a first sub-pixel and a second sub-pixel. The first and second sub-pixels include respective static corners at a same position relative to a respective sub-pixel center and respective dynamic corners. The position of each dynamic corner is located such that shapes of the first and second sub-pixels are different. In another aspect, a method for manufacturing a semiconductor component is provided. An organic material is deposited on first and second anodes utilizing a mask arranged over semiconductor material. The mask comprises first and second sub-pixel regions, each of which have a respective static corner and a respective dynamic corner such that shapes of the first and second sub-pixel regions are different.
G09G 3/20 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice
H10K 59/35 - Dispositifs spécialement adaptés à l'émission de lumière multicolore comprenant des sous-pixels rouge-vert-bleu [RVB]
H10K 59/65 - OLED intégrées avec des capteurs d'images inorganiques
47.
SYSTEM AND METHOD FOR SOFTWARE-BASED ENHANCEMENTS OF ARM64 PROCESSORS
A method, computer program product, and computing system for processing a first request to access data using an ARM64 processor. A first level cache data portion is defined by calculating a portion of the data to retrieve to a first level cache within a cache memory system. A second level cache data portion is defined by calculating a portion of data to retrieve to a second level cache within the cache memory system. The ARM64 processor is instructed to retrieve the second level cache data portion before retrieving the first level cache data portion. The ARM64 processor is instructed to retrieve the first level cache data portion.
G06F 12/084 - Systèmes de mémoire cache multi-utilisateurs, multiprocesseurs ou multitraitement avec mémoire cache partagée
G06F 12/0862 - Adressage d’un niveau de mémoire dans lequel l’accès aux données ou aux blocs de données désirés nécessite des moyens d’adressage associatif, p. ex. mémoires cache avec pré-lecture
G06F 12/0897 - Mémoires cache caractérisées par leur organisation ou leur structure avec plusieurs niveaux de hiérarchie de mémoire cache
G06F 12/126 - Commande de remplacement utilisant des algorithmes de remplacement avec maniement spécial des données, p. ex. priorité des données ou des instructions, erreurs de maniement ou repérage
G06F 12/128 - Commande de remplacement utilisant des algorithmes de remplacement adaptée aux systèmes de mémoires cache multidimensionnelles, p. ex. associatives d’ensemble, à plusieurs mémoires cache, multi-ensembles ou multi-niveaux
48.
NETWORK PROCESSING USING FIXED-FUNCTION LOGIC COMPONENTS CLOSE-COUPLED WITH PROGRAMMABLE LOGIC AND SOFTWARE
Implementations of architectures for network processing using a fixed-function logic per-op component close-coupled with programmable logic and software are provided. One aspect provides an integrated circuit device for network processing, the device comprising a composable processing pipeline that includes a programmable per-op component and a fixed-function logic per-op component that is close-coupled with programmable logic and software. The device further comprises a compute complex component comprising processing circuitry implementing the software for controlling the programmable per-op component and the fixed-function logic per-op component, wherein for a first processing pipeline, the processing circuitry is configured to perform a first function using the programmable per-op component, and for a second processing pipeline, the processing circuitry is configured to perform a second function using the fixed-function logic per-op component.
A method for detecting fraudulent SIM swapping on mobile devices is presented. When a server receives a call request, it obtains and compares the device's unique identifier to a stored one. If different, the call is held pending authentication. Authentication involves sending instructions via messaging service for the user to transmit a specific code from the device's native messaging app. Upon receiving the code, the device is authenticated. Alternatively, a client app initiates a synthetic call, handing it off to the native dialer. If handoff fails due to lack of mobile network connectivity, the device is suspected of being compromised, requiring additional authentication. This method enhances security by leveraging native dialer and messaging applications. The server thus provides a robust system to detect and prevent fraudulent SIM swapping attempts, ensuring user protection in mobile communications.
H04W 12/126 - Dispositions antivol, p. ex. protection contre le clonage de module d’identité d’abonné [SIM]
H04W 12/48 - Dispositions de sécurité utilisant des modules d’identité utilisant la liaison sécurisée, p. ex. liant de manière sécurisée les modules d'identité aux dispositifs, aux services ou aux applications
A device may obtain a provided haptic waveform. A device may convert the provided haptic waveform with a Fourier transform to create a converted haptic waveform. A device may identify at least one frequency peak of the converted haptic waveform. A device may drive an eccentric rotating mass (ERM) haptic device at least partially according to the at least one frequency peak.
Systems and methods for performing collective operations associated with artificial intelligence (AI) using a property of electromagnetic radiation are described. An example method for processing an artificial intelligence (AI) model includes a first set of neurons associated with a first layer of the AI model communicating via electromagnetic radiation: (1) reference signals for facilitating a collective operation associated with the AI model, and (2) input signals for use with a second layer of the AI model for performing the collective operation associated with the AI model. The method further includes a second set of neurons associated with the second layer receiving, as a result of processing of a property of the electromagnetic radiation, the reference signals, and the input signals for performing the collective operation associated with the AI model.
Methods and apparatuses for automating the retopologization of 3D meshes including the automated selection and adjustment of correspondence points are described. The automated selection of correspondence points may be performed to refine locations of correspondence points using a matching score that is computed based on surface normal similarity between surfaces corresponding with a candidate correspondence point on an input scan mesh and a point on a morphable model of 3D surfaces. The matching score may also take into account a distance between a candidate correspondence point on the input scan mesh and a corresponding point on the morphable model of 3D surfaces and similarities in surface features, such as similarities in surface curvature at the candidate correspondence point on the input scan mesh and the corresponding point on the morphable model of 3D surfaces.
G06V 10/75 - Organisation de procédés de l’appariement, p. ex. comparaisons simultanées ou séquentielles des caractéristiques d’images ou de vidéosApproches-approximative-fine, p. ex. approches multi-échellesAppariement de motifs d’image ou de vidéoMesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexteSélection des dictionnaires
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
Disclosed is a preference model that improves the alignment of large language model (LLM) responses. For a particular scenario, a set of principles are identified that, when adhered to by the LLM, improve the quality of LLM responses. In a long form question and answer scenario, better answers tend to be useful, relevant, grounded, thorough, and true, although more and different principles are similarly contemplated. In a scenario that generates a movie script, better responses may include a relatable protagonist, a character arc, and a satisfying denouement. The disclosed axiomatic preference model is trained to understand when a response does or does not adhere to these principles. Once trained, the preference model may be used as a drop-in replacement of existing preference models used to train an LLM.
A compiler creates a dependency graph for a function in an input program. The dependency graph includes nodes corresponding to commands in the function and edges that correspond to dependencies between the commands. The compiler performs a forward reachability analysis on the dependency graph to eliminate redundant dependencies. The compiler also adds a minimized set of back-edges to the dependency graph to enforce loop-carried resource dependencies in the input program. The compiler then allocates synchronization primitives provided by a multiprocessor computing system, such as semaphores, to the commands in the function of the input program based on the contents of the dependency graph.
A hardware component package (400) includes a temperature-sensitive hardware component (402) having a first surface (404A) and a second surface (404B) opposite from the first surface (404A). A substrate (406) is coupled with the temperature-sensitive hardware component (402) via a wire bond (408) connected to the first surface (404A) of the temperature-sensitive hardware component (402). A package enclosure (410) is coupled with the second surface (404B) of the temperature-sensitive hardware component (402). The package enclosure (410) includes a central portion (418) and a lateral portion (420) formed from separate pieces of material and separated by a thermal isolation layer (422). A thermoelectric cooler (TEC) (412) is disposed between the second surface (404B) of the temperature-sensitive hardware component (402) and the package enclosure (410), such that at least some heat generated by the temperature-sensitive hardware component (402) is dissipated to the package enclosure (410) via the TEC (412).
H01L 23/34 - Dispositions pour le refroidissement, le chauffage, la ventilation ou la compensation de la température
H01L 23/38 - Dispositifs de refroidissement utilisant l'effet Peltier
H04N 23/52 - Éléments optimisant le fonctionnement du capteur d'images, p. ex. pour la protection contre les interférences électromagnétiques [EMI] ou la commande de la température par des éléments de transfert de chaleur ou de refroidissement
H01L 23/36 - Emploi de matériaux spécifiés ou mise en forme, en vue de faciliter le refroidissement ou le chauffage, p. ex. dissipateurs de chaleur
H04N 23/54 - Montage de tubes analyseurs, de capteurs d'images électroniques, de bobines de déviation ou de focalisation
56.
CROSSTALK REDUCTION USING TRACE COUPLING IN INTEGRATED CIRCUIT COMPONENTS
Aspects of the embodiments disclosed herein include electrical systems that reduce crosstalk, such as far-end crosstalk (FEXT) between two or more signal paths of a memory system by electromagnetically coupling the two or more signal paths each comprising a respective trace. Electromagnetically coupling the two or more respective traces includes positioning at least two traces in close proximity with each other such that a mutual-to-self-inductance ratio (Lm/L) between the at least two signal paths matches or substantially matches the mutual-to-self-capacitance ratio (Cm/C) of the at least two signal paths. Certain embodiments of this disclosure are directed to a passive manner of reducing FEXT between any number of signal paths without adding traces with the sole purpose of reducing crosstalk, thereby reducing or maintaining a signal density between designs of a component of an IC package.
The present disclosure provides methods, apparatuses, and non-transitory computer-readable media for deploying a target cloud service platform. Configuration information for a target cloud service platform may be obtained. A bootstrap infrastructure may be built in a public cloud service platform based on the configuration information, the bootstrap infrastructure provides a collection of kernel services associated with the target cloud service platform. A collection of target services may be created in the target cloud service platform based on the collection of kernel services.
A computerized system and method for creating synthetic controls in survival analysis is provided. A target group of patients who are administered a drug and a control group of patients who are not administered the drug are created from real data of patients. A weight is applied to a common feature of each patient in the control group of patients so that a linear combination of the common feature of the patients in the control group of patients becomes similar to a particular patient in the target group of patients. A synthetic patient is created for each patient in the control group of patients. Because the common feature of the synthetic patient is similar to the particular patient in the target group, an efficacy of the drug may be determined by comparing the target group of patients with the synthetic patient.
G16H 10/20 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des essais ou des questionnaires cliniques électroniques
G16H 20/10 - TIC spécialement adaptées aux thérapies ou aux plans d’amélioration de la santé, p. ex. pour manier les prescriptions, orienter la thérapie ou surveiller l’observance par les patients concernant des médicaments ou des médications, p. ex. pour s’assurer de l’administration correcte aux patients
G16H 50/70 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicalesTIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour extraire des données médicales, p. ex. pour analyser les cas antérieurs d’autres patients
G16H 50/50 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicalesTIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour la simulation ou la modélisation des troubles médicaux
G16H 50/20 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicalesTIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le diagnostic assisté par ordinateur, p. ex. basé sur des systèmes experts médicaux
G16H 15/00 - TIC spécialement adaptées aux rapports médicaux, p. ex. leur création ou leur transmission
59.
ORCHESTRATOR WITH SEMANTIC-BASED REQUEST ROUTING FOR USE IN RESPONSE GENERATION USING A TRAINED GENERATIVE LANGUAGE MODEL
A system (10) is provided for managing specialized tasks and information retrieval processes. Agents (28) are configured to perform tasks and/or retrieve information (26) in a specialized domain. The system (10) receives, via an interaction interface (38), a message (34) from a user for the trained generative model (50) to generate an output, generates a context (46) of the message (34), generates a request (54) including the context (46) and the message (34), executes an orchestrator (58) configured to: receive the request (54), determine, using semantic decision making, one or more agents (28) to handle the request (54), input the request (54) into one or more agents (28) to perform a task and/or retrieve information (26) in specialized domains, generate a prompt (44) based on the retrieved information (26) and/or the performed task and the message (34) from the user, provide the prompt (44) to the trained generative model (50), receive, in response to the prompt (44), a response (52) from the trained generative model (50), and output the response (52) to the user.
A computer-implemented method is provided that prevents prompt injection attacks against generative models. Input data is received, and a first prompt section is generated from the input data. First, second and third instructions are received, which respectively instruct the generative model to carry out a task based on the first prompt section, inform the generative model of a boundary of the first prompt section, and instruct the generative model to ignore any instructions in present in the first prompt section. A prompt for the generative model is generated from the first prompt section and the first instructions, second instructions and third instructions.
A system may include a grid connection configured to receive grid electrical power at a grid voltage, a grid amperage, and a grid frequency. A system may include a co-location including a plurality of computing devices. A system may include a solid-state transformer in electrical communication with the grid connection and configured to convert the grid electrical power to co-location electrical power having a co-location voltage different from the grid voltage and a co-location amperage different from the grid amperage. A system may include a superconducting cable providing electrical communication of the co-location electrical power from the solid-state transformer to the co-location.
G06F 1/18 - Installation ou distribution d'énergie
H02J 3/00 - Circuits pour réseaux principaux ou de distribution, à courant alternatif
H02J 3/12 - Circuits pour réseaux principaux ou de distribution, à courant alternatif pour règler la tension dans des réseaux à courant alternatif par changement d'une caractéristique de la charge du réseau
H05K 7/14 - Montage de la structure de support dans l'enveloppe, sur cadre ou sur bâti
H02J 9/06 - Circuits pour alimentation de puissance de secours ou de réserve, p. ex. pour éclairage de secours dans lesquels le système de distribution est déconnecté de la source normale et connecté à une source de réserve avec commutation automatique
The present disclosure proposes a method, apparatus and computer program product for content recommendation for a target application. A topic description corresponding to a topic associated with an information set may be generated, the information set being one of a plurality of information sets of the target application. An information set embedding of the information set may be generated based on the topic description. A set of historical items of a target user may be obtained. A user embedding of the target user may be generated based on the set of historical items. A similarity score between the information set embedding and the user embedding may be calculated. The information set may be loaded in a user interface of the target application based at least on the similarity score.
To compensate for the non-linearity of a phase interpolation (PI) circuit, the interpolation clocks of two PI circuits receiving different interpolation codes may be summed. However, even if the non-linearities of the interpolated clocks have opposite polarities, they may have different magnitudes causing some non-linearity. A weighted summing PI that sums interpolated clocks of two PI circuits includes a weighted summing circuit that employs a weight signal to generate a weighted summed interpolated clock having an interpolated phase, based on the weight signal, between the phases of the interpolated clocks. As a result, the phase of the weighted summed interpolated clock may be more influenced by the phase of one of the interpolated clocks from the two PI circuits than the other. A weight calibration circuit may be included to select a balanced weight signal to reduce non-linearity in the weighted summing PI.
H03K 5/135 - Dispositions ayant une sortie unique et transformant les signaux d'entrée en impulsions délivrées à des intervalles de temps désirés par l'utilisation de signaux de référence de temps, p. ex. des signaux d'horloge
Examples of the present disclosure describe an interlock mechanism for optical fibers. A protective enclosure for selectively covering a manual release mechanism of an optical fiber connector is described. In some examples, an optical system includes the protective enclosure, a mechanical actuator, and an optical device, among other components. A user interacts with the mechanical actuator to move the protective enclosure to a covered position or an uncovered position, disallowing or allowing, respectively, physical access to the manual release mechanism. The user's interaction with the mechanical actuator also concurrently turns the optical device on, off, or to a different power level, which provides, stops, or reduces the power of, respectively, optical signals provided to the optical fibers via the optical connector.
Display devices, display systems, backlight assemblies, and methods described herein provide compound backlights combining edge and direct lighting. In an aspect, the backlight (650A) includes a waveguide layer (608), first light sources (616A), and an array layer (632). The first light sources (616A) are arranged along an edge (638) of the waveguide layer (608). Each of the first light sources (616A) transmits light (620A, 620B) into the waveguide layer (608). The array layer (632) is coupled to a lower first surface (634) of the waveguide layer (608) and comprises a reflective layer (614) and second light sources (618A, 618B, 618C). The reflective layer (614) reflects light (620B) transmitted by the first light sources (616A) into the waveguide layer (608). The second light sources (618A, 618B, 618C) are arranged between the waveguide layer (608) and the reflective layer (614). Each of the second light sources (618A, 618B, 618C) transmits light (624) into the waveguide layer (608) through the first surface (634). In a further aspect, the second light sources are oriented away from the waveguide layer and toward the reflective surface. In another aspect, a compound backlight includes a reflective surface, arranged between a waveguide layer and second light sources, that reflects a portion of received light.
Systems and methods are disclosed herein for providing fair allocation of resources in a multi-tenant environment. Systems and methods are configured for identifying a plurality of tenants participating in the multi-tenant environment. For each tenant of the plurality of tenants, systems determine a tenant status as a donating tenant, a fairly borrowing tenant, or an unfairly borrowing tenant and apply a different borrowing algorithm to each tenant of the plurality of tenants based on a corresponding tenant status determined for each tenant. Different borrowing algorithms are configured to determine different resource borrowing limits from a common pool of resources for each tenant.
Systems and methods for transforming a captured content item are provided. In particular, a computing device may receive a capture request to capture a content item, in response to the capture request, capture the content item and provide the content item in a first user interface element of a content management tool, apply a generative transformation function to the content item to generate a transformed content item, write the transformed content item in a second user interface element of the content management tool, receive a paste request to paste the transformed content item at a requested location, and in response to the paste request, provide the transformed content item at the requested location.
According to implementations of the present disclosure, a solution for molecular modeling is provided. According to the solution, an interatomic position representation of a molecule is determined based on respective positions of a plurality of atoms in the molecule, the interatomic position representation characterizing relative spatial positions between individual pairs of atoms in the plurality of atoms; a feature representation of the molecule is determined based on an atomic attribute representation of the molecule and the interatomic position representation, the atomic attribute representation characterizing respective attributes of the plurality of atoms; and a prediction of a target property for the molecule is determined based on the feature representation. Thus, relative spatial positions between atoms are considered in molecular modeling to introduce more abundant information in modeling. In this way, the accuracy of predicting molecular properties can be improved.
A prompt to a large language model for a source code translation of an input source code snippet written in a source programming language into a different target programming language is augmented with compiled code translations. The compiled code translations are identified from a ranked list of the top-k code candidates in the target programming language having a close similarity to an embedding of the input source code snippet. A reward model generates a reward score for each of the top-k code candidates which is used to identify the top-n code candidates having less compilation errors. The top-n code candidates are included in the prompt to the large language model to guide the model on the translation task.
Examples of the present disclosure describe systems and methods for. In some examples, a software agent collects data from a node, such as logs or monitoring information, and provides the data to a controller. The controller assesses the attestation state and the configuration drift of the node. In some examples, the controller applies a taint to the node, which may indicate a condition or constraint on the node. A scheduler manages the workloads on the node based on the attestation state, the configuration drift, and in some examples, the taint of the node. The scheduler decides whether to schedule a workload to the node, evict a workload from the node, or keep a workload on the node depending on the attestation state and configuration drift of the node, for example, whether the workload has a toleration for the taint of the node.
G06F 21/55 - Détection d’intrusion locale ou mise en œuvre de contre-mesures
G06F 21/57 - Certification ou préservation de plates-formes informatiques fiables, p. ex. démarrages ou arrêts sécurisés, suivis de version, contrôles de logiciel système, mises à jour sécurisées ou évaluation de vulnérabilité
71.
ATTACK PATH DISCOVERY ENGINE IN A SECURITY MANAGEMENT SYSTEM
Methods, systems, and computer storage media for providing attack path discovery management using an attack path discovery engine of a security management system. Attack path discovery management supports automatic attack path discovery that involves identifying and mapping potential pathways that attackers could use to infiltrate computing environments. In operation, an attack path discovery computation model comprising an entry point element, an advancement step element, and a target element, is accessed. A computing environment graph comprising computing components of a computing environment is accessed. Based on the entry point element, an entry point is identified in the computing graph; based on the advancement step element, an advancement step is identified in the computing environment graph; and based on the target element, a target is identified in the computing environment graph. An attack path is generated based on the entry point, the advancement steps, and the target. The attack path is communicated.
G06F 21/57 - Certification ou préservation de plates-formes informatiques fiables, p. ex. démarrages ou arrêts sécurisés, suivis de version, contrôles de logiciel système, mises à jour sécurisées ou évaluation de vulnérabilité
72.
SYSTEM AND METHOD FOR PROACTIVELY REDUCING HALLUCINATIONS IN GENERATIVE ARTIFICIAL INTELLIGENCE (AI) MODEL RESPONSES
A method, computer program product, and computing system for processing a prompt for a target generative AI model and a corresponding response generated by the target generative AI model for the prompt. The prompt and the corresponding response from the generative AI model are compared to a plurality of predefined verified prompt-response pairs. In response to determining at least a threshold similarity between the prompt and the corresponding response and a predefined verified prompt-response pair, the corresponding response from the target generative AI model is provided to a source of the prompt.
Relational database systems are disclosed that are enabled to operate with versioned metadata. The relational database system includes a lock manager, a transaction manager and a version aware metadata storage and cache configured to store to store and manage versions of metadata, to determine which of such versions should be visible at any given point in time, and to enable creation of the proper versions of metadata. In an aspect, the transaction manager manages transaction identifiers and their associated start times, abort times and/or commit times. Such data enables determination of transaction visibility, and consequently the metadata version visibility, for any point in time. In an aspect, such metadata versioning support enables snapshot isolation of metadata transactions.
Described herein are technologies related to analyzing behavioral data of an entity in a cloud computing environment and determining suitability of providing the behavioral data to a computer-executable model that is configured to identify anomalous behavior of the entity. The technologies described herein improve performance of computer-executable models that are configured to detect anomalous behavior in a cloud computing environment.
A disclosed method facilitates translation of natural language queries into query language statements usable to retrieve data from or write data to a particular database. The method includes obtaining a pool of shots. Each shot in the pool includes a natural language query component and a corresponding database translation component. The method further provides for vectorizing the natural language query component for each of the shots into a common vector space; receiving a natural language query from a user interface; vectorizing the natural language query within the common vector space; identifying a subset of vectorized natural language query components that satisfy a similarity metric when compared to the vectorized natural language query; and generating an LLM prompt that includes shots from the pool corresponding to the subset of the vectorized natural language query.
Multi-layer anomaly detection identifies and issues alerts to focus limited resources on the most concerning activities detected in voluminous data. A multi-layer anomaly detector includes a clusterer, a forecaster, and a statistics generator to respectively generate a first, second, and third anomaly lists for input data indicating voluminous events relative to one or more domains, such as access to computing devices; access to real estate; or financial transactions. An ensemble detector may generate an ensemble anomaly list indicating a subset of the voluminous events based on the first, second, and third anomaly lists. The lists may indicate anomaly (e.g., security risk) scores. The ensemble anomaly list may combine the anomaly (e.g., risk) scores in the first, second, and third anomaly lists. An identifier may generate an alert for the subset of events, e.g., with relative security risk scores for the relevant individuals, entities, and/or events.
The technology described herein provides an improved training framework for a diffusion model used for a super resolution (SR) task. In particular, the technology provides diffusion rectification to correct a training-sampling discrepancy inherent in current training methods. The technology also provides estimation-adaptation. The diffusion rectification portion of the technology uses an estimated HR image, rather than a ground truth HR image as the seed to the forward process. This improves model performance issues caused by a training-sampling discrepancy. The training-sampling discrepancy occurs because the training and sampling processes do not use the same data. The estimation adaption strategy injects ground truth to the plurality of noisy images to reduce the training-estimation error in the images. In an aspect, a different amount of ground truth is injected into training images based on the training image's location in the Markov chain.
G06T 3/4053 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement basé sur la super-résolution, c.-à-d. où la résolution de l’image obtenue est plus élevée que la résolution du capteur
78.
REMOTE DIRECT MEMORY ACCESS DATA REPLICATION MODEL
A computer-implemented method for replicating a log to a remote computer system is disclosed. The method involves identifying a log comprising a data portion and a metadata portion for replication. The data portion is sent to the remote computer system using a Remote Direct Memory Access (RDMA) write operation, while the metadata portion is sent using a first RDMA send operation after the data portion has been sent. The method further includes identifying a second RDMA send operation received from the remote computer system, which indicates the completion of the first RDMA send operation. Based on identifying the second RDMA send operation, the method determines the completion of log replication to the remote computer system. This method enables efficient and reliable replication of logs in a computer system.
G06F 15/173 - Communication entre processeurs utilisant un réseau d'interconnexion, p. ex. matriciel, de réarrangement, pyramidal, en étoile ou ramifié
A method for generating storyboards is described. An extraction prompt is provided to a first generative neural network model. The extraction prompt is a text-based prompt that instructs the first generative neural network model how to identify timestamps of segments having related content within transcripts according to dialog within the transcripts. A transcript of a meeting is provided as an input to the first generative neural network model. Segment timestamps for identified segments within the meeting are received from the first generative neural network model based on the extraction prompt and the transcript. Segment images for the identified segments are generated using a second generative neural network model, wherein each of the segment images represents segment content within a corresponding identified segment.
Systems and methods for generating autocomplete text using a language model are disclosed. An image and text-prefix may be entered at an input field of a search application. The image is processed to generate an image description. The image description and the text-prefix signals may be used as input at a language model to generate an autocomplete text by the language model. A contextual history may also be included as input to the language model. The autocomplete text is an output by the language model based on the input at the language model. The auto-complete text may be a next-word ghosting.
This disclosure relates to utilizing a threat detection system to detect anomalous actions provided by a compromised large generative language model (LLM). For instance, the threat detection system utilizes a detection-based large generative model to process select communication between an application system and the LLM and determine when the LLM may have been potentially compromised. In various implementations, utilizing the detection-based large generative model, the threat detection system determines when an LLM is improperly instructing an application system to invoke tools to perform unapproved actions. Furthermore, when an LLM becomes compromised, the threat detection system intelligently safeguards the detection-based large generative model against similar threats that seek to evade detection or compromise the detection-based large generative model.
A computer system and method are disclosed for replicating logs in a distributed environment. The method includes identifying a write input/output (I/O) operation and identifying a log to be replicated based on the write I/O operation. The log is then persisted to local non-volatile memory in the computer system. Subsequently, the log is replicated to multiple remote hosts, where each remote host of the plurality of remote hosts stores the log in its corresponding local non-volatile memory without de-staging the log to a backing store. The write I/O operation is committed once the log is replicated to at least a subset of the remote hosts that forms a quorum. Finally, the log is de-staged to the backing store after the write I/O operation is successfully committed.
G06F 11/34 - Enregistrement ou évaluation statistique de l'activité du calculateur, p. ex. des interruptions ou des opérations d'entrée–sortie
G06F 17/40 - Acquisition et consignation de données
G06F 9/46 - Dispositions pour la multiprogrammation
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
83.
FRACTIONAL PROCESSING CAPACITY ALLOCATION BASED ON ESTIMATED LOAD
A method (1100) for artificial intelligence (AI) inferencing workload allocation includes, at a computing device (124) of a distributed AI inferencing platform (108), receiving (1102) an estimated prompt load (132) and an estimated generation load (134) of an AI inferencing workload (116) to be fulfilled by a processing unit (104) of a computing node (100) of the distributed AI inferencing platform (108). Based at least in part on the estimated prompt load (132) and the estimated generation load (134), an inference unit (IU) processing load (136) is estimated, the IU processing load (136) to be applied to the processing unit (104) while fulfilling the AI inferencing workload (116). Fractional processing capacity (138A) of the processing unit (104) is allocated for fulfilling the AI inferencing workload (116) based at least in part on the IU processing load (136).
G06F 9/50 - Allocation de ressources, p. ex. de l'unité centrale de traitement [UCT]
G06N 3/063 - Réalisation physique, c.-à-d. mise en œuvre matérielle de réseaux neuronaux, de neurones ou de parties de neurone utilisant des moyens électroniques
Examples are disclosed that relate to a generative model for generating inorganic material candidates, such as crystalline structures. One example provides a method (700), comprising training (702) an unconditional generative model using a dataset of stable periodic material structures, the unconditional generative model comprising a diffusion model. The training comprises learning the diffusion model to iteratively noise the stable periodic material structures of the dataset towards a random periodic structure by noising (702) atom types of atoms in the periodic material structure, noising (702) fractional coordinates of the atoms in the periodic material structure, and noising (702) a lattice of the periodic material structure. The method further comprises using (720) the trained unconditional generative model to generate a material structure by iteratively denoising an initial structure sampled from a random distribution.
Examples described in this disclosure relate to sub-kelvin control systems and methods for scalable quantum control. An example system includes a first cooling sub-system operable to maintain an operating temperature for a first device within a first sub-kelvin temperature range. The system further includes a second cooling sub-system, separate from the first cooling sub-system, operable to maintain an operating temperature for a second device, different from the first device, within a second sub-kelvin temperature range. The first sub-kelvin range may comprise a range between 50 milli-kelvin (mK) to 999 mK and the second sub-kelvin range may comprise a range between 1 mK to 299 mK. The combination of the first cooling sub-system and the second cooling sub-system is configured to maintain a temperature gradient between the first device and the second device despite the first device and the second device being in close proximity to each other.
86.
DETECTION OF QUASIPARTICLE POISONING AT MAJORANA ISLAND
A computing system including a quantum computing device. The quantum computing device includes a Majorana island at which Majorana zero modes (MZMs) are instantiated. The quantum computing device further includes a quantum dot electrically connectable to an MZM, a capacitance sensor capacitively coupled to the quantum dot, and a controller. The controller is configured to set a Majorana island gate voltage of the Majorana island and a quantum dot gate voltage of the quantum dot to a candidate resonance Majorana island voltage and a candidate resonance quantum dot voltage. The controller is further configured to receive a capacitance measurement of the quantum dot and the Majorana island and determine whether resonance occurs based on the capacitance measurement. The controller is further configured to reset the gate voltages. The controller is further configured to output a quasiparticle poisoning value indicated by the one or more determinations of whether resonance occurs.
87.
GENERATIVE AI-DRIVEN MULTI-SOURCE DATA QUERY SYSTEM
Embodiments of the disclosed technologies include, in response to receiving a query, matching the query to metadata from a plurality of heterogeneous data sources, and selecting one or more data sources from the plurality of heterogeneous data sources for answering the query, by sending the query and embeddings of the matched metadata to a generative artificial intelligence (GAI), and prompting the GAI to select matching data sources. Based on the data from the GAI, generating one or more custom queries targeted to the matching data sources selected by the GAI, the custom queries formatted to be sent to the selected data sources, executing the one or more custom queries across the selected data sources, and summarizing results from the executing and providing a response to the query.
Techniques are described herein in which a programmable logic device (PLD) is integrated into a baseboard management controller (BMC). A programming-enhanced BMC is powered on by a PLD that is integrated into the programming-enhanced BMC and that is coupled to an internal bus of the programming-enhanced BMC. A configuration file is provided from immutable BMC hardware in the BMC to the PLD based at least on the programming-enhanced BMC being powered on. The configuration file specifies a configuration to be programmatically applied to programmable hardware of the PLD. The programmable hardware of the PLD is programmed by loading the configuration file, which causes the programmable hardware to render a peripheral interface that is defined by the configuration file natively on the internal bus of the programming-enhanced BMC.
G06F 30/34 - Conception de circuits pour circuits reconfigurables, p. ex. réseaux de portes programmables [FPGA] ou circuits logiques programmables [PLD]
89.
DETERMINING LOGICAL STABILIZER INSTRUMENT FOR STABILIZER CIRCUIT
A computing system is provided, including a processor configured to receive a standardized stabilizer instrument specification including an input Clifford unitary, an output Clifford unitary, and a plurality of stabilizer instrument bit matrices. The processor is further configured to receive a logical instrument input error correction code and a logical instrument output error correction code. The processor is further configured to compute a logical instrument specification based at least in part on the standardized stabilizer instrument specification, the logical instrument input error correction code, and the logical instrument output error correction code. The logical instrument specification includes a logical input Clifford unitary, a logical output Clifford unitary, a plurality of logical instrument bit matrices, and a logical instrument relabeling matrix. The processor is further configured to store the logical instrument specification in memory.
90.
ADAPTIVE VIDEO COMPRESSION USING GENERATIVE MACHINE LEARNING
Various embodiments of the technology described herein relate to compression of video data, including selecting a pivot image from a video including a plurality of images and causing a first machine learning model to generate a descriptor of the pivot image, where the descriptor includes a language description associated with the pivot image. In one example, the pivot image and the descriptor are provided to a decoder for reconstruction of the video. In an embodiment, the decoder includes a generative machine learning model that takes as an input the pivot image and the descriptor. The decoder uses the pivot image to generate an image based at least in part on the descriptor. The image is combined with other images generated by the generative machine learning model to reconstruct the video.
H04N 19/463 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression par compression des paramètres d’encodage avant la transmission
H04N 19/503 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre la prédiction temporelle
H04N 21/266 - Gestion de canal ou de contenu, p. ex. génération et gestion de clés et de messages de titres d'accès dans un système d'accès conditionnel, fusion d'un canal de monodiffusion de VOD dans un canal multidiffusion
In an example embodiment, a generator model such as a large language model (LLM) is leveraged to generate embeddings for both pieces of content and users. The embeddings map the pieces of content and the users into the same latent n-dimensional space. The embeddings are then fine-tuned using a two-tower deep neural network, with one of the towers representing users and the other tower representing content. The two-tower deep neural network is trained to optimize the embeddings over some shared goal, such as user engagement with content, and uses information such as user interactions with content in that process. A clustering technique, such as K-nearest neighbor (kNN) can then be used to identify a grouping of top user/content pairs based on similarity between users and content, as reflected in the embeddings. For a given piece of content, therefore, the top users from that cluster can then be recommended as an audience for the content.
G06Q 30/0242 - Détermination de l’efficacité des publicités
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
92.
GENERATING INFORMED PRIORS FOR HYPERPARAMETER SELECTION
A system iteratively evaluates the target machine learning model using evaluation hyperparameter values of the target machine learning model to measure performance of the target machine learning model for different combinations of the evaluation hyperparameter values. The system trains a surrogate machine learning model using the different combinations of the evaluation hyperparameter values as features and the performance of the target machine learning model based on a corresponding combination of the evaluation hyperparameter values as labels. The system generates a feature importance vector of the surrogate machine learning model based on the training of the surrogate machine learning model, generate informed priors based on the feature importance vector, and generates the target hyperparameter values of the target machine learning model based on the informed priors.
Embodiments of the present disclosure include countermeasure circuit techniques for cyberattacks. In one embodiment, portions of combinational logic receive shared input bit groups and produce shared output bit groups. Shared output bit groups may be coupled between series configured combinational logic portions using control gates. Clock signals are delayed to activate the control gates after the outputs are stable. In some embodiments, a first combinational logic group and second combinational logic group operate on a clock and inverse clock.
G06F 21/75 - Protection de composants spécifiques internes ou périphériques, où la protection d'un composant mène à la protection de tout le calculateur pour assurer la sécurité du calcul ou du traitement de l’information par inhibition de l’analyse de circuit ou du fonctionnement, p. ex. pour empêcher l'ingénierie inverse
This document relates to communication by backscattering of satellite signals. One example includes a satellite backscatter transmitter having a first antenna configured to receive a radio frequency satellite signal, a modulator configured to modulate the radio frequency satellite signal to obtain a modulated radio frequency satellite signal, a digital logic circuit configured to selectively control the modulator to encode information according to a communication scheme, and a second antenna configured to passively retransmit the modulated radio frequency satellite signal to a receiver.
95.
SECURITY ENHANCEMENT FOR COMPUTING DEVICE STATE CHANGE
Systems and methods are disclosed herein for identifying a bypass of a computing device state change. In an example system, a determination is made that a computing component, such as an application executing on the computing device, is blocking a state change of the computing device. The state change includes various types of actions to protect the computing device, such as an automatic lock, logoff, standby mode change, or powering off change. An idle period of the computing device is detected. A proximity change of a user relative to the computing device is also detected. Based on the idle period and the proximity change, an action to remediate the blocking of the state change is performed, such as generating a notification associated with the blocking of the state change for providing to the user and/or automatically bypassing the blocking of the state change.
The techniques disclosed herein enable an autonomous agent to interpret an input dataset and orchestrate a suite of software modules to perform a computational task on a representation of a chemical material. The input dataset includes a prompt defining a computational task to be performed on a chemical material. Moreover, the input dataset includes data defining a chemical included in the chemical material, molecular descriptors describing the chemical and/or the chemical material, and an external variable. The agent analyzes the benefits and drawbacks of each model within the context of the computational task to determine a technique for performing the computational task. Accordingly, the agent formulates a chain of calls invoking the functionality of data processing tools and models to perform the computational task responsive to the prompt.
G16C 20/30 - Prévision des propriétés des composés, des compositions ou des mélanges chimiques
G16C 20/70 - Apprentissage automatique, exploration de données ou chimiométrie
G16C 60/00 - Science informatique des matériaux, c.-à-d. TIC spécialement adaptées à la recherche des propriétés physiques ou chimiques de matériaux ou de phénomènes associés à leur conception, synthèse, traitement, caractérisation ou utilisation
G06N 3/00 - Agencements informatiques fondés sur des modèles biologiques
Techniques for implementing an AI threat modeling tool are disclosed. A static analysis tool is used to extract a candidate code snippet from a code repository. The candidate code snippet is identified as potentially being a security relevant code element. The static analysis tool generates additional context associated with the candidate code snippet. An LLM prompt is generated. This prompt is structured to include the candidate code snippet, the context, and a directive to assign a classification to the candidate code snippet. The classification includes a source classification, a sink classification, a sanitizer classification, or a flow step classification. The LLM operates on the prompt to generate output comprising a specific classification for the candidate code snippet. The output is formatted into a data extension file that is consumable by the static analysis tool.
G06F 21/56 - Détection ou gestion de programmes malveillants, p. ex. dispositions anti-virus
G06F 21/57 - Certification ou préservation de plates-formes informatiques fiables, p. ex. démarrages ou arrêts sécurisés, suivis de version, contrôles de logiciel système, mises à jour sécurisées ou évaluation de vulnérabilité
Example solutions perform natural language query processing on hybrid utterances. A precise segment is identified, within the hybrid utterance, and processed with a symbolic AI interpreter configured to generate a first interpretation. The precise segment is replaced, within the hybrid utterance, with a placeholder term thereby resulting in a vague utterance. The vague utterance is processed with a statistical AI interpreter configured to generate a second interpretation. The first interpretation is merged with the second interpretation using the hybrid utterance as a template for the merger and using the placeholder term as the location for the first interpretation within the second interpretation. A complete interpretation is generated and transmitted to a query generator.
Detection of malicious direct memory access (DMA) device used for direct device assignment. A virtualization computer system assigns a peripheral device to an operating context within a virtualization environment. The peripheral device is DMA capable. The virtualization computer system monitors a signal source that is affected by DMA operations initiated by the peripheral device while the peripheral device is assigned to the operating context. Based on monitoring the signal source, the virtualization computer system identifies a signal pattern characterizing the DMA operations that are initiated by the peripheral device. Using the signal pattern, the virtualization computer system determines that the DMA operations initiated by the peripheral device are abnormal and the virtualization computer system identifies the peripheral device as malicious.
G06F 21/53 - Contrôle des utilisateurs, des programmes ou des dispositifs de préservation de l’intégrité des plates-formes, p. ex. des processeurs, des micrologiciels ou des systèmes d’exploitation au stade de l’exécution du programme, p. ex. intégrité de la pile, débordement de tampon ou prévention d'effacement involontaire de données par exécution dans un environnement restreint, p. ex. "boîte à sable" ou machine virtuelle sécurisée
G06F 21/56 - Détection ou gestion de programmes malveillants, p. ex. dispositions anti-virus
G06F 21/57 - Certification ou préservation de plates-formes informatiques fiables, p. ex. démarrages ou arrêts sécurisés, suivis de version, contrôles de logiciel système, mises à jour sécurisées ou évaluation de vulnérabilité
G06F 21/85 - Protection des dispositifs de saisie, d’affichage de données ou d’interconnexion dispositifs d’interconnexion, p. ex. les dispositifs connectés à un bus ou les dispositifs en ligne
100.
METHODS AND SYSTEMS FOR ENHANCING MULTIMODAL CAPABILITIES IN LARGE LANGUAGE MODELS
Systems and methods are provided for enhancing the speech modality in a large language model (LLM) and for retaining in-context learning capabilities without overfitting to trained tasks. Systems obtain a first set of training data comprising tuples of a sample of speech combined with synthetically generated pairings of speech comprehension test questions and answers that correspond to the sample of speech and obtain a second set of training data comprising pairings of automatic speech recognition data. Systems generate and align a first set of encodings of the first set of training data and a second set of encodings of the second set of training data. Systems train the LLM on a greater amount of the first set of training data than the second set of training data and use the trained LLM to perform a natural language processing task.
G10L 15/06 - Création de gabarits de référenceEntraînement des systèmes de reconnaissance de la parole, p. ex. adaptation aux caractéristiques de la voix du locuteur
G10L 15/16 - Classement ou recherche de la parole utilisant des réseaux neuronaux artificiels
G10L 15/183 - Classement ou recherche de la parole utilisant une modélisation du langage naturel selon les contextes, p. ex. modèles de langage
G10L 15/26 - Systèmes de synthèse de texte à partir de la parole