Examples are disclosed that relate to power supply devices and methods for adjusting a length of a device cable extending from the power supply device. In one example, a method comprises, at a power supply device comprising a device cable, determining a cable property of the device cable and, based at least in part on the cable property of the device cable, actuating a motor in the power supply device to change the length of the device cable extending from the power supply device.
Examples of the present disclosure describe systems and methods for preventing illicit data transfer and storage. In aspects, a computing platform may receive a data request from a caller system, device, or service. The computing platform may identify data items/properties associated with the data request and retrieve one or more rules relevant to the caller and/or caller location. The retrieved rule(s) may be used to evaluate the data item(s) such that data items, data item content, and/or data item properties that are prohibited by the retrieved rule(s) from being manipulated (e.g., accessed, transferred, stored) are removed from the identified data item(s). Based on the evaluation of the identified data item(s), one or more relevant status codes may be set. The computing platform may then manipulate the identified data item(s) in accordance with the data request and provide a processing response to the caller.
Systems, methods, devices, and computer readable storage media described herein provide techniques for generating recommendations utilizing a heterogeneous distance function. In an aspect, a measure of relevancy between a first data item and a second data item is received. A setting of an adjustable parameter of a parameterized heterogeneous distance function is determined based on the measure of relevancy. The parameterized heterogeneous distance function comprises first and second sub-functions. The first sub-function calculates a distance between data items based on features of a first data type and the second sub-function calculates a distance between data items based on features of a second, different, data type. A recommendation system is caused to utilize the parameterized heterogeneous distance function to generate a recommendation based on received input. In a further aspect, the measure of relevancy is determined as a function of a measure of interactions and a measure of impressions.
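The parameterized distance above lends itself to a short sketch. The following Python is illustrative only: the two sub-functions (an L1 distance over numeric features and a mismatch count over categorical ones), the field names, and the way the adjustable parameter is derived from interactions over impressions are all assumptions, not details taken from the disclosure.

```python
def numeric_distance(a, b):
    """Sub-function for features of a numeric data type (L1 distance)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def categorical_distance(a, b):
    """Sub-function for features of a categorical data type (mismatch count)."""
    return sum(1 for x, y in zip(a, b) if x != y)

def heterogeneous_distance(item_a, item_b, alpha):
    """Blend the two sub-functions; alpha is the adjustable parameter."""
    num = numeric_distance(item_a["numeric"], item_b["numeric"])
    cat = categorical_distance(item_a["categorical"], item_b["categorical"])
    return alpha * num + (1 - alpha) * cat

def alpha_from_relevancy(interactions, impressions):
    """Relevancy as interactions over impressions, clamped to [0, 1]."""
    if impressions <= 0:
        return 0.5
    return max(0.0, min(1.0, interactions / impressions))

a = {"numeric": [1.0, 2.0], "categorical": ["sports", "en"]}
b = {"numeric": [2.0, 2.0], "categorical": ["news", "en"]}
alpha = alpha_from_relevancy(interactions=30, impressions=60)  # 0.5
print(heterogeneous_distance(a, b, alpha))  # 0.5*1.0 + 0.5*1 = 1.0
```

With `alpha` near 1 the numeric sub-function dominates and near 0 the categorical one does, which is the lever the recommendation system adjusts.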
Some embodiments perform lightweight monitoring for garbage collection (GC) flow events, then perform focused tracing after detecting a performance problem signal. The tracing is focused by constraints which are specified in a designation data structure, including a performance problem signal definition, a corresponding trace data category and a corresponding tracing stop trigger. Tracing is done only in the specified category and only for the specified time period, to reduce or avoid collection of irrelevant trace data and to reduce or avoid changes in program behavior caused by the tracing itself. Some designation data structures also specify a corresponding trace data analysis. In operation, some embodiments dynamically re-focus tracing on an offshoot trace in response to a trace data analysis result obtained while the program is still executing.
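A minimal sketch of such a designation data structure and the monitoring loop it constrains might look as follows; the field names, the 200 ms pause signal, and the stop-after-three-events trigger are invented for illustration.

```python
# Illustrative designation data structure for focused tracing: a performance
# problem signal definition, a trace data category, and a tracing stop trigger.
designation = {
    "signal": lambda metrics: metrics["gc_pause_ms"] > 200,   # problem signal
    "trace_category": "gc",                                   # what to trace
    "stop_after_events": 3,                                   # stop trigger
}

trace_log = []

def monitor(stream, designation):
    tracing, captured = False, 0
    for event in stream:
        if not tracing and designation["signal"](event["metrics"]):
            tracing = True                       # focus tracing on the signal
        if tracing and event["category"] == designation["trace_category"]:
            trace_log.append(event)              # only the specified category
            captured += 1
            if captured >= designation["stop_after_events"]:
                break                            # stop trigger reached

stream = [
    {"category": "gc", "metrics": {"gc_pause_ms": 50}},
    {"category": "gc", "metrics": {"gc_pause_ms": 250}},  # signal fires here
    {"category": "io", "metrics": {"gc_pause_ms": 10}},   # wrong category
    {"category": "gc", "metrics": {"gc_pause_ms": 30}},
    {"category": "gc", "metrics": {"gc_pause_ms": 40}},
    {"category": "gc", "metrics": {"gc_pause_ms": 60}},
]
monitor(stream, designation)
print(len(trace_log))  # 3 gc events captured, the io event is skipped
```

Tracing begins only once the signal fires, captures only the designated category, and halts at the stop trigger, mirroring the three constraints the abstract lists.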
Non-limiting examples of the present disclosure describe implementation of an exemplary synchronization protocol to identify file data for synchronization as well as negotiate how to achieve data transport for synchronization of the file data. In one example, a request for synchronization of data is received from a processing device. In response to receiving the request, a response is generated. The response may comprise: identification of file data for synchronization, instructions for accessing the file data and instructions indicating a data transport protocol to utilize to obtain the file data. The response may be transmitted to the processing device, for example, to enable the processing device to synchronize file data. Other examples are also described.
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/178 - Techniques for file synchronisation in file systems
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
H04L 67/561 - Adding application functional data or application control data, e.g. metadata
A computing system includes a topological quantum computing device comprising a plurality of Majorana islands that form a plurality of physical qubits. The computing system further includes a controller configured to, for each of the physical qubits, in a measurement-based qubit benchmarking (MBQB) stage, determine an error metric value of a qubit error metric associated with the physical qubit. Determining the error metric value includes, at the Majorana island that forms the physical qubit, performing a Pauli measurement sequence including a plurality of Pauli measurements. Determining the error metric value further includes computing the error metric value based at least in part on respective results of the plurality of Pauli measurements. The controller is further configured to output the error metric value.
G06N 10/70 - Quantum error correction, detection or prevention, e.g. surface codes or magic state distillation
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
H10N 69/00 - Integrated devices, or assemblies of multiple devices, comprising at least one superconducting element covered by group H10N 60/00
Some embodiments construct a set of build dependencies for a program without access to a full set of build instructions. When multiple clashing name resolutions are identified for a particular dependency, a union of the alternative versions is formed. Intermediate representations of the union of program versions, such as symbol tables, abstract syntax trees, and other internal compiler data structures, are emitted to persistent non-volatile storage, instead of using a single resolution to create temporary intermediate data to build an executable program. Security analysis and licensing analysis utilize the persisted program representations to analyze the union of multiple overlapping but different versions of the program.
This disclosure describes utilizing a generative document system to dynamically build and provide generative text documents using one or more generative artificial intelligence (AI) models. For example, the generative document system efficiently utilizes various systems and one or more generative AI models to determine intents and topics, curate topic sections, and generate a generative text document that includes a directed answer along with select curated topic sections for search queries. In various implementations, the generative document system performs additional actions that enhance the efficiency and accuracy of operations used to produce generative text documents. Additionally, in many cases, these generative text documents provide a foundation for providing an interactive, intuitive, wide-ranging, and flexible curation of answers to users that address the corresponding search queries.
G06F 16/2457 - Query processing with adaptation to user needs
G06F 16/215 - Improving data quality; data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/9538 - Presentation of query results
10.
META-REFLECTION TECHNIQUES FOR LEARNING INSTRUCTIONS FOR LANGUAGE AGENTS USING PAST SELF-REFLECTIONS
A data processing system implements accessing a datastore of training data using a model training unit to obtain a first training sample, the first training sample comprising a first natural language utterance, first ground truth information, the first natural language utterance requesting that content be generated by a language model, the first ground truth information providing a first example of first expected output of the language model in response to the first natural language utterance; constructing a first prompt based on the first natural language utterance using a prompt construction unit; providing, using the prompt construction unit, the first prompt to the language model as an input to cause the language model to generate a first output; analyzing the first output and the first ground truth information using the model training unit to determine whether the first output is erroneous; constructing, using the prompt construction unit, a second prompt that instructs the language model to generate a first self-reflection response that indicates why the language model generated the first output; providing the second prompt as an input to the language model to cause the language model to generate the first self-reflection response; constructing, using the prompt construction unit, a third prompt that includes the first self-reflection response, the third prompt instructing the language model to generate prompt improvement instructions to be included in subsequently constructed prompts for the language model to assist the language model in generating a correct response to the subsequently constructed prompts; providing the third prompt to the language model to cause the language model to generate the prompt improvement instructions; and including the prompt improvement instructions in the subsequently constructed prompts generated using the prompt construction unit.
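The claim above describes a loop of prompting, error checking, self-reflection, and instruction generation. The sketch below fakes the language-model calls with a stub so the control flow can be followed end to end; every function name and string in it is hypothetical, not from the disclosure.

```python
# Stub model: returns a canned self-reflection, canned improvement
# instructions, or a deliberately wrong first answer, depending on the prompt.
def fake_model(prompt):
    if "Why did you produce" in prompt:
        return "I ignored the requested date format."
    if "Write instructions" in prompt:
        return "Always use ISO-8601 dates."
    return "3rd of May"   # the erroneous first output

improvement_instructions = []

def build_prompt(utterance):
    """Prompt construction: prepend any accumulated improvement instructions."""
    return "\n".join(improvement_instructions + [utterance])

utterance = "Give today's date."
ground_truth = "2024-05-03"

first_output = fake_model(build_prompt(utterance))          # first prompt
if first_output != ground_truth:                            # output erroneous
    reflection = fake_model(f"Why did you produce '{first_output}'?")   # 2nd
    instructions = fake_model(f"Write instructions to avoid this: {reflection}")
    improvement_instructions.append(instructions)  # reused in later prompts

print(build_prompt(utterance))
# Subsequent prompts now begin with the learned instruction line.
```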
Systems and methods are described for identifying and resolving performance issues of automated components. The automated components are segmented into groups by applying a K-means clustering algorithm thereto based on segmentation feature values respectively associated therewith, wherein an initial set of centroids for the K-means clustering algorithm is selected by applying a set of context rules to the automated components. Then, for each group, a performance ranking is generated based at least on a set of performance feature values associated with each of the automated components in the group and a feature importance value for each of the performance features. The feature importance values are determined by training a machine learning based classification model to classify automated components into each of the groups, wherein the training is performed based on the respective performance feature values of the automated components and the respective groups to which they were assigned.
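A compact sketch of the segmentation step, with context rules choosing the initial centroids instead of random seeds. The component data, rules, and feature values are invented for illustration, and plain Python stands in for a production K-means implementation.

```python
def context_rule_centroids(components, rules):
    """Pick one seed component per context rule as an initial centroid."""
    return [next(c["features"] for c in components if rule(c)) for rule in rules]

def assign(components, centroids):
    """Assign each component to its nearest centroid (squared L2 distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)),
                key=lambda k: d2(c["features"], centroids[k]))
            for c in components]

def kmeans(components, rules, iters=10):
    centroids = context_rule_centroids(components, rules)
    for _ in range(iters):
        labels = assign(components, centroids)
        for k in range(len(centroids)):
            members = [c["features"] for c, g in zip(components, labels) if g == k]
            if members:    # recompute centroid as the mean of its members
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return assign(components, centroids)

components = [
    {"name": "job-a", "kind": "batch", "features": [0.1, 0.2]},
    {"name": "job-b", "kind": "batch", "features": [0.2, 0.1]},
    {"name": "svc-a", "kind": "service", "features": [0.9, 0.8]},
]
rules = [lambda c: c["kind"] == "batch", lambda c: c["kind"] == "service"]
print(kmeans(components, rules))  # batch jobs in one group, the service apart
```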
Technologies related to constructing a scrollable feed of electronic content items are described, where electronic content items in the scrollable feed of electronic content items are arranged in groups in the scrollable feed. Clusters of electronic content items are generated based upon previous user interactions with electronic content items, and representations of the clusters are formed. Based upon a request for a scrollable feed, electronic content items that are of interest to the user are identified and assigned to the clusters based upon the representations of the clusters, thereby forming groups of electronic content items. The scrollable feed is caused to be displayed, where the feed includes the groups.
Systems and methods for automatic service discovery and inter-service communications in a peer-to-peer network are disclosed. A device connected to a peer-to-peer network multicasts information about services that are active on the device (e.g., local services) and accessible to other devices on the network. The device can receive, from other devices on the network, multicasted information about services that are active on the other devices (e.g., remote services). The device includes a registry for maintaining information about available local and remote services. Each device in the network can send queries to active services on other devices on the network and receive notifications from those remote services when the remote services identify relevant data (e.g., data that satisfies the query).
H04L 67/145 - Termination or inactivation of sessions, e.g. event-controlled end of session avoiding end of session, e.g. keep-alive, heartbeats, resumption message or wake-up for an inactive or interrupted session
14.
TECHNIQUES FOR ROBUST REAL-TIME MULTIPLE-OBJECT TRACKING WITH DETECTION PROPAGATION AND PER-CLASS OPTIMIZATION
A data processing system implements obtaining a frame of video content at an object detection pipeline, the video content comprising a plurality of frames; analyzing the frame using an object detection model to detect a plurality of objects and associate each object with a confidence score; performing a primary matching operation on high confidence detection objects to associate the high confidence detection objects with an object track of a plurality of object tracks, the high confidence detection objects being objects associated with a confidence score that satisfies a confidence threshold; performing a secondary matching operation on low confidence detection objects to associate the low confidence detection objects with an object track of the plurality of object tracks, low confidence detection objects being objects associated with a confidence score that does not satisfy the confidence threshold; and outputting the plurality of object tracks.
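The two matching stages can be illustrated with a simplified greedy IoU matcher. The abstract does not specify the assignment method, so the matcher, thresholds, and boxes below are assumptions made for the sketch.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match(tracks, detections, min_iou=0.3):
    """Greedily pair each track with its best-overlapping free detection."""
    assigned = {}
    free = list(detections)
    for tid, box in tracks.items():
        if not free:
            break
        best = max(free, key=lambda d: iou(box, d["box"]))
        if iou(box, best["box"]) >= min_iou:
            assigned[tid] = best
            free.remove(best)
    return assigned, free

tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
detections = [
    {"box": (1, 1, 11, 11), "score": 0.9},    # high confidence
    {"box": (21, 21, 31, 31), "score": 0.2},  # low confidence
]
threshold = 0.5
high = [d for d in detections if d["score"] >= threshold]
low = [d for d in detections if d["score"] < threshold]

primary, _ = match(tracks, high)                     # primary matching stage
remaining = {t: b for t, b in tracks.items() if t not in primary}
secondary, _ = match(remaining, low)                 # secondary matching stage
print(sorted(primary), sorted(secondary))  # [1] [2]
```

The low-confidence detection still extends track 2 in the secondary stage instead of being discarded outright, which is the point of the two-pass design.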
A transmission method comprising: encoding a message into error correction coding blocks, each comprising a different portion of the payload bits of the message and a respective plurality of error correction bits; and applying an interleaving map to error correction coding blocks, such that for each error correction coding block, different bits of the error correction coding block are mapped to different channels, each channel thus being used to transmit bits from a respective selection of multiple of the error correction coding blocks. Thus, for at least one channel failure scenario in which at least one of the plurality of channel fails leaving a plurality of remaining channels, enough bits will remain on the remaining channels to enable correction of the message based on the error correction coding.
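One concrete (and invented) interleaving map consistent with this description sends bit j of block i to channel (i + j) mod C, so no single channel carries more than a few bits of any one block:

```python
def interleave(blocks, num_channels):
    """Map bit j of error-correction block i to channel (i + j) mod C."""
    channels = [[] for _ in range(num_channels)]
    for i, block in enumerate(blocks):
        for j, bit in enumerate(block):
            channels[(i + j) % num_channels].append((i, j, bit))
    return channels

blocks = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
channels = interleave(blocks, num_channels=4)

# If channel 2 fails, count how many bits each block loses:
lost = {}
for (i, j, bit) in channels[2]:
    lost[i] = lost.get(i, 0) + 1
print(lost)  # {0: 1, 1: 1, 2: 1} - each block loses only 1 of its 4 bits
```

Because each block loses only one bit in this failure scenario, the per-block error correction bits can recover the message from the remaining channels, as the abstract argues.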
Some embodiments confirm that a natural language request from a user relates to garbage collection or to an application performance problem that sometimes involves garbage collection. Some embodiments also check the user request for malicious injections, and some also check garbage collection trace data for sufficiency. Some embodiments build a prompt, computed from the user request and a predefined prompt template, such as a “garbage collection question-and-answer with context” template, a “performance rules elucidation” template, an “exploratory data analysis” template, or an “end-to-end garbage collection chat” template. Some prompt templates specify an agent role, and some specify sections or output formats for a response. The prompt is submitted to an artificial intelligence agent, such as a large language model, and the agent's response is used to make a garbage collection insight that is then presented to the user.
Methods, systems, and apparatuses include receiving, via a conversational interface, user input from a user of an online system. A user input embedding is generated for the user input. A vector store is retrieved including tool description embeddings. A similarity search is performed using the user input embedding and the tool description embeddings. A set of tool descriptions is determined using results of the similarity search. A prompt is generated using the set of tool descriptions and the user input. Machine learning agents are applied to the prompt to cause the machine learning agents to use tools associated with the set of tool descriptions. A response to the prompt is received, from the machine learning agents, in response to the machine learning agents using the tools. An output to the user input based on the response is sent, via the conversational interface, to the user of the online system.
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
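The retrieval-then-prompt flow of the preceding abstract can be sketched with toy term-count vectors standing in for real embeddings and a brute-force cosine search standing in for the vector store; the tool names and descriptions are invented for illustration.

```python
import math

def embed(text):
    """Toy 'embedding': a bag-of-words term-count vector."""
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

tool_descriptions = {
    "calendar_tool": "create and update calendar meetings and events",
    "email_tool": "send and search email messages",
    "ticket_tool": "open and track support tickets",
}
# The "vector store" of tool description embeddings:
store = {name: embed(desc) for name, desc in tool_descriptions.items()}

def select_tools(user_input, top_k=1):
    """Similarity search between the input embedding and tool embeddings."""
    q = embed(user_input)
    ranked = sorted(store, key=lambda name: cosine(q, store[name]), reverse=True)
    return ranked[:top_k]

selected = select_tools("schedule a meeting on my calendar")
print(selected)  # ['calendar_tool']
# A prompt for the agents is then built from the selected tool descriptions:
prompt = "Available tools:\n" + "\n".join(tool_descriptions[t] for t in selected)
```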
18.
TARGET PROPERTY SELECTION TECHNIQUES FOR LEARNING WHAT TO TEACH LANGUAGE MODELS FOR CODE GENERATION
A data processing system implements extracting symbolic property information from a training dataset by analyzing the training dataset with a symbolic property mining pipeline to extract properties of program code from one-shot program code examples, the symbolic property information indicative of types of properties of the one-shot program code examples determined to improve program code output by a large language model (LLM) in response to natural language utterances; and training a property recognition model to recognize symbolic properties associated with a natural language utterance using the training dataset and the symbolic property information, the property recognition model being configured to analyze the natural language utterance and to output the symbolic properties of program code.
Some embodiments find locations of targets which are related to a symbol in a source code snippet, when the snippet is external to a project codebase. Finding a target's location allows user-directed or proactive automatic navigation from the symbol into the codebase, display of data type, signature, and other semantic information of the symbol, proactive automatic creation of an import statement for a definition of the symbol, and other utilizations of the target location in an enhanced editor or enhanced debugger or another tool. In some scenarios, the external snippet is generated by an artificial intelligence agent, using part of the codebase as context. Some embodiments find a target of an external snippet's symbol in another external snippet, allowing a tool utilization that is informed by the project codebase even when both snippets are outside the project codebase.
A system-integrated solution for input/output device extended-control without operating system driver involvement is provided. In examples, systems and methods include receiving an out-of-band request from a human interface device, receiving an in-band request from the human interface device, and sending a signal directly to a peripheral device, via a micro-control unit, to honor the out-of-band request. In examples, the out-of-band request is received via an inter-integrated circuit.
Approximating a more complex multi-objective feed item scoring model using a less complex single objective feed item scoring model in a multistage feed ranking system of an online service. The disclosed techniques can facilitate multi-objective optimization for personalizing and ranking feeds, including balancing personalizing a feed for viewer experience, downstream professional or social network effects, and upstream effects on content creators. The techniques can approximate the multi-objective model (which uses a rich set of machine learning features for scoring feed items at a second pass ranker in the ranking system) with the more lightweight, single objective model (which uses fewer machine learning features at a first pass ranker in the ranking system). The single objective model can more efficiently score a large set of feed items while maintaining much of the multi-objective model's richness and complexity and with high recall at the second pass ranking stage.
G06N 20/20 - Ensemble learning
G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
Various embodiments discussed herein relate to using one or more language models and/or mapping platforms to generate a response to a natural language question or command regarding geographical information associated with a mapping platform. In response to receiving such a natural language question or command, some embodiments first extract contextual data. Based at least in part on the extracting of the contextual data, various embodiments then provide the contextual data and the natural language command or question as input into one or more language models such that the one or more language models and/or mapping platforms generate a response. Some embodiments then cause presentation of an indication associated with the response.
A network element receives a call between a first and second party on an original call path. The call comprises a signaling flow and a media flow. The network element determines that a server is to be added to the call, and then adds the server to the call by redirecting the media flow from the network element via the server. Determining that the server is to be removed from the call is achieved by any of: receiving a request from an orchestrator, receiving information about a failure of the server, receiving information about a performance drop in a communications link between the server and the network element. In response to determining that the server is to be removed from the call, the network element then returns the media flow to the original path of the call. The signaling flow is maintained on the original path.
A method, computer program product, and computing system for generating an internal state prompt with medical content and a multi-action task to perform on a healthcare system. A first output healthcare system command is generated by processing the internal state prompt using a trained multimodal generative artificial intelligence (AI) model. The first output healthcare system command is converted into a first healthcare system-executable command associated with the multi-action task for a first target healthcare subsystem. Modified medical content is generated by executing the first healthcare system-executable command on the medical content using the first target healthcare subsystem. The internal state prompt is updated with the modified medical content generated by executing the first healthcare system-executable command and the first output healthcare system command listed as a past action performed during execution of the multi-action task.
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Example solutions for clustering data include: encoding a plurality of source records into a plurality of vectors, each source record containing source terms being encoded as one vector of the plurality of vectors; computing a similarity score for each unique pair of vectors of the plurality of vectors; constructing a similarity graph by: adding a node to the similarity graph for each vector; and adding an edge between each pair of nodes in which the similarity score for the associated pair of vectors exceeds a first similarity threshold; identifying one or more clusters of nodes within the similarity graph, each cluster of nodes representing a disconnected subgraph within the similarity graph; and generating a graphical representation of the one or more clusters on a display device.
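A self-contained sketch of this clustering flow, using toy two-dimensional vectors in place of encoded source records and cosine similarity as the score; the connected components of the similarity graph are the clusters.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_graph(vectors, threshold):
    """One node per vector; an edge where similarity exceeds the threshold."""
    edges = {i: set() for i in range(len(vectors))}
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            if cosine(vectors[i], vectors[j]) > threshold:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def clusters(edges):
    """Connected components of the similarity graph are the clusters."""
    seen, out = set(), []
    for start in edges:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:             # depth-first traversal of one component
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(edges[n] - comp)
        seen |= comp
        out.append(comp)
    return out

vectors = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(clusters(build_graph(vectors, threshold=0.8)))  # [{0, 1}, {2}]
```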
Examples of the present disclosure describe systems and methods for automating the identification of events in a text file. In examples, a computing system identifies a subset of a text file that comprises an unknown event using a set of rules. Each rule of the set of rules specifies a first pattern of characters that is compared to the subset of the text file. When the set of rules does not identify the unknown event, the subset of the text file is provided to a language model to generate a new rule with a second pattern of characters and an identifier of the new rule. The system then generates an updated set of rules by adding the new rule to the set of rules.
G06N 5/025 - Extracting rules from data
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false positive rate versus false negative rate
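The rule-update loop of the abstract above can be sketched as follows; the language-model call is replaced by a stub function, and the rule names, patterns, and log lines are invented for illustration.

```python
import re

# Initial rule set: each rule maps an event identifier to a character pattern.
rules = {"login_failure": re.compile(r"auth error: bad credentials")}

def fake_language_model(snippet):
    """Stand-in for the LLM: proposes an identifier and pattern for the event."""
    return "disk_full", re.compile(r"write failed: no space left")

def identify_event(snippet):
    for name, pattern in rules.items():
        if pattern.search(snippet):
            return name
    # No rule matched the unknown event: ask the model for a new rule and
    # add it to the set, producing the updated set of rules.
    name, pattern = fake_language_model(snippet)
    rules[name] = pattern
    return name

print(identify_event("auth error: bad credentials for user bob"))  # login_failure
print(identify_event("write failed: no space left on device"))     # disk_full
assert "disk_full" in rules   # the rule set was updated
```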
27.
Rejection Sampling For Polynomial Coefficient Generator
A polynomial coefficient generator includes a random number generator to generate a random bit string. A buffer is coupled to receive bits of the random bit string, and a rejection sampler is coupled to the buffer to receive n+1 sets of p bits of buffered bits of the random bit string, where n is an integer having a value of at least four, and sample each set of p bits in parallel to identify valid sets of p bits. A valid coefficient queue is coupled to receive the valid sets of p bits, and a polynomial multiplier is coupled to receive the valid sets of p bits from the valid coefficient queue. A method uses the generator to generate valid coefficients.
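A sequential sketch of the rejection-sampling step (the described hardware processes the n+1 sets of p bits in parallel). The modulus q = 3329 and the 12-bit width are illustrative choices borrowed from lattice-based schemes, not values taken from the disclosure.

```python
import random

def rejection_sample(bitstring, p, q, count):
    """Consume p-bit chunks; keep those below q until `count` coefficients
    are found, so accepted values are uniform in [0, q)."""
    coeffs = []
    for start in range(0, len(bitstring) - p + 1, p):
        value = int(bitstring[start:start + p], 2)
        if value < q:                 # accept: a valid set of p bits
            coeffs.append(value)
            if len(coeffs) == count:
                break
        # otherwise reject and move to the next chunk
    return coeffs

random.seed(1)
bits = "".join(random.choice("01") for _ in range(256))
coeffs = rejection_sample(bits, p=12, q=3329, count=8)
assert all(c < 3329 for c in coeffs)  # every kept value is a valid coefficient
print(coeffs)
```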
A system may include a radar-based tracking system, an image-based tracking system, one or more processors, and one or more computer-readable recording media that store instructions that are executable by the one or more processors to configure the system to: (i) obtain, via the radar-based tracking system, radar-based measurement data; (ii) utilize the radar-based measurement data as input to an event detection module to generate event detection output; and (iii) when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object.
G01S 13/536 - Discriminating between fixed and moving objects or between objects moving at different speeds using transmission of continuous waves, unmodulated or modulated in amplitude, frequency or phase
G01S 13/56 - Discriminating between fixed and moving objects or between objects moving at different speeds for presence detection
The present invention relates to telecommunications and specifically to methods and systems for enabling Internet-based telephone calls using Over-the-Top (OTT) communication services while ensuring compliance with jurisdictional regulatory requirements. Described herein are techniques for binding a non-mobile device, such as a laptop or tablet, to a mobile device capable of providing real-time location information. The binding is established through a local proximity network using short-range wireless communication protocols. The mobile device transmits its connectivity status and location data to either the non-mobile device or a cloud-based communication service, which then acts as a gatekeeper to permit or inhibit call initiation based on the received status information. This system ensures that calls made from the non-mobile device can be accurately located in real-time, facilitating compliance with regulations that mandate location verification for emergency services and lawful intercepts.
H04L 65/1069 - Establishment or termination of a session
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks, involving control of end-device applications over a network
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
H04W 4/02 - Services making use of location information
H04W 4/80 - Services using short-range communication, e.g. near-field communication, radio-frequency identification or low-energy communication
H04W 88/04 - Terminal devices adapted for relaying to or from another terminal or user
30.
HYBRID MACHINE LEARNING MODEL ENVIRONMENT WITH HOMOMORPHIC ENCRYPTION
The technology described herein is related to a hybrid neural network that divides operations of a neural network layer between a server and a client device. In an aspect, one or more linear operations of a neural network layer are performed on the client, while non-linear operations, such as an activation function, are performed on the server. In an aspect, the technology described herein maintains network security by encrypting portions of the client-side components. The encrypted portions may be learned values, which may also be described as learned parameters. In aspects, homomorphic encryption is used.
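Conceptually, one layer splits as sketched below. Real homomorphic encryption is omitted from the sketch, so the weights here are plaintext placeholders for what the abstract describes as encrypted learned parameters; all names and values are illustrative.

```python
def client_linear(x, weights, bias):
    """Client side: the linear operation y = Wx + b. In the described system
    the weights would be the encrypted learned parameters."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def server_activation(y):
    """Server side: the non-linear activation (ReLU used here)."""
    return [max(0.0, v) for v in y]

weights = [[1.0, -2.0], [0.5, 0.5]]   # placeholder learned parameters
bias = [0.0, -1.0]
x = [3.0, 1.0]

# One layer's forward pass, split across the two parties:
out = server_activation(client_linear(x, weights, bias))
print(out)  # [1.0, 1.0]
```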
Examples of the present disclosure describe systems and methods for automating the identification of events in a text file. In examples, a computing system identifies a subset of a text file that comprises an unknown event using a set of rules. Each rule of the set of rules specifies a first pattern of characters that is compared to the subset of the text file. When the set of rules does not identify the unknown event, the subset of the text file is provided to a language model to generate a new rule with a second pattern of characters and an identifier of the new rule. The system then generates an updated set of rules by adding the new rule to the set of rules.
G06F 40/216 - Parsing using statistical methods
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/22 - Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
Devices and methods for estimating localization lengths in hybrid superconductor-semiconductor quantum (HSSQ) devices are described. A method for estimating localization lengths in an HSSQ device comprising a set of plunger gates formed in a first layer of the HSSQ device and a set of top gates formed, above the set of plunger gates, in a second layer of the HSSQ device, includes obtaining measurements of nonlocal conductance values associated with the HSSQ device. At least one junction associated with the HSSQ device attenuates one or more of the measured nonlocal conductance values. The method further includes normalizing the measured nonlocal conductance values to remove an effect of the attenuation caused by the at least one junction and extracting localization lengths based on the normalized nonlocal conductance values. The method further includes, using a processor, estimating the localization lengths for the HSSQ device.
The techniques described herein automatically correlate the health of cloud resources to a broader health determination for an entity executing within, or supported by, a distributed computing environment. In contrast to the typical manual analysis that is required to make a broader health determination for a specific entity, the techniques generate and use a standard health model that can be applied, or scaled, to detect unhealthy scenarios across a variety of different entities with different owners (e.g., different tenants and/or different cloud resource providers). Furthermore, to meet varying owner perspectives on health, the techniques include a layer on top of the standard health model that enables an owner to provide input that customizes the standard health model for their own entity.
H04L 43/045 - Processing of captured monitoring data, e.g. for logfile generation, for graphical visualisation of monitoring data
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking availability, by checking functioning
A data processing system includes a processor, and a memory storing executable instructions which, when executed by the processor, cause the processor, alone or in combination with other processors, to implement: a united data platform for extracting data from a software release pipeline for specific software; a software change insights module to generate insights into changes to the specific software on a per build basis using the extracted data; a deployment insights module to generate deployment insights using the extracted data; and a dashboard to organize the generated insights and intelligently route deployment of a build to upgrade the specific software based on the generated insights to expedite deployment.
A data processing system implements constructing a first prompt including a font mask of a reference character (RC) and a style prompt, sending the first prompt to a text2image model to iteratively generate salient content and concentrate the salient content within the font mask of RC as a first image of RC; concatenating two of the first images as a second image; generating a combined font mask of the font mask of RC and a font mask of a target character (TC); constructing a second prompt including the combined font mask and the second image, sending the second prompt to the model to iteratively generate salient content and in-paint the salient content within a half of the combined font mask as a third image of RC and TC; cropping a styled TC image from the third image using the font mask of TC; providing the styled TC image to a client device.
Embodiments of the disclosed technologies are capable of generating an input of a set of input-output pairs using a first large language model (LLM) and a domain-specific training content. The set of input-output pairs is used to train a second LLM during supervised learning to perform a downstream task. The embodiments describe generating an output corresponding to the input of the set of input-output pairs using the first LLM and the domain-specific training content. The output includes reasoning by the first LLM contributing to the performing of the downstream task. The embodiments further describe training the second LLM to perform the downstream task using the set of input-output pairs and the reasoning.
A cryptographic accelerator utilizes a combination of parallel and pipelined butterfly operator circuits to perform a number theoretic transform (NTT) or inverse NTT (INTT). The accelerator includes a first set of pipelined pairs of parallel butterfly operator circuits configured to operate on pairs of polynomial coefficients to provide output coefficients. A first buffer is coupled to store the output coefficients. A second set of pipelined pairs of parallel butterfly operator circuits is configured to operate on pairs of coefficients obtained from the first buffer to provide coefficients of the polynomial in the NTT domain or out of the NTT domain.
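For context, the butterfly operation that such an accelerator replicates in parallel and pipelines is the inner step of the standard iterative Cooley-Tukey NTT, sketched below in software. This is a reference sketch using a common NTT-friendly prime; the hardware pairing, buffering, and pipelining described in the abstract are not modeled.

```python
P = 998244353  # NTT-friendly prime (P - 1 = 2**23 * 7 * 17)
G = 3          # primitive root modulo P

def ntt(a, invert=False):
    """Iterative Cooley-Tukey NTT over Z_P; each inner-loop body is one
    butterfly operation on a pair of coefficients."""
    n = len(a)
    a = a[:]
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(G, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)  # modular inverse of the root
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % P
                a[k] = (u + v) % P                # one butterfly consumes a pair
                a[k + length // 2] = (u - v) % P  # and produces two outputs
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a = [x * n_inv % P for x in a]
    return a

coeffs = [5, 1, 4, 2, 9, 8, 3, 7]
restored = ntt(ntt(coeffs), invert=True)  # INTT of the NTT round-trips
```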
Techniques are described herein that are capable of performing AI-based generation of a computer program using compiler-gathered semantic information about target code. A user-generated request that requests information about target code is converted into an AI prompt, which requests that the AI model generate a computer program to determine the information. An AI model is caused to generate the computer program, which comprises configuring the computer program to determine, at runtime of the computer program, the information using semantic information about the target code gathered by a compiler and provided to the computer program by an API, by providing the AI prompt as an input to the AI model. A response to the AI prompt that includes the computer program is received from the AI model. Presentation of a representation of the computer program and/or automatic execution of the computer program against the target code is triggered.
Example implementations include a method, apparatus, and computer-readable medium configured for controlling access to computer resources based on animations. A computer system transmits at least three animations for display to the user on a client device. At least a first animation and a second animation are associated with a respective correct label and at least a third animation is not associated with a correct label. The computer system receives an input of a respective selected label for each of the animations. The computer system determines whether to allow access to the user based on the respective selected labels of the first animation and the second animation being the respective correct labels. The computer system associates the respective selected label for the third animation with the third animation as a potential label for the third animation in response to determining to allow access to the user.
Systems and methods are provided for implementing deep search functionality using large language models (“LLMs”). In various examples, a computing system uses at least one LLM to generate intents based on a user query, to generate alternative queries based on a selected or identified primary intent, and to generate a relevance score for each search result that is obtained from a search utility (e.g., an Internet search engine, a file storage search utility, an email search utility, or a document storage search utility) in response to a primary query (corresponding to the primary intent) and the generated alternative queries being entered into the search utility. The search results from the search utility are sorted based on the corresponding generated relevance scores, and the sorted search results are caused to be displayed to the user as a deep search response to the user query.
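The deep-search pipeline in the abstract above (intents, alternative queries, per-result relevance scoring, sorting) can be sketched end to end as follows. `llm` and `search` are stand-ins for an LLM and a search utility, which the abstract leaves unspecified; the toy fakes below exist only so the sketch runs.

```python
def deep_search(query, llm, search):
    """Sketch of the described pipeline: llm(prompt) -> str,
    search(q) -> list[str]."""
    # 1. Generate intents and select a primary intent.
    intents = llm(f"List intents for: {query}").splitlines()
    primary = intents[0]
    # 2. Generate alternative queries from the primary intent.
    alternatives = llm(f"Alternative queries for intent: {primary}").splitlines()
    # 3. Run every query and score each result's relevance with the LLM.
    scored = []
    for q in [primary] + alternatives:
        for result in search(q):
            score = float(llm(f"Score relevance of {result!r} to {query!r}"))
            scored.append((score, result))
    # 4. Sort by relevance score, deduplicating repeated results.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    seen, ordered = set(), []
    for _, result in scored:
        if result not in seen:
            seen.add(result)
            ordered.append(result)
    return ordered

# Toy stand-ins so the sketch runs end to end.
def fake_llm(prompt):
    if prompt.startswith("List intents"):
        return "find python tutorials\nfind python jobs"
    if prompt.startswith("Alternative queries"):
        return "python beginner guide"
    return "0.9" if "tutorial" in prompt else "0.2"

def fake_search(q):
    return ["tutorialspoint", "jobs board"]

results = deep_search("python", fake_llm, fake_search)
```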
A technique trains a student language model by: obtaining a source item that contains content; generating plural tasks based on the content using a group of example-generating agents; transforming the plural tasks into plural teacher-generated responses using a teacher language model; transforming the plural tasks into student-generated responses using the student language model; and updating parameters of the student language model based on the student-generated responses and corresponding teacher-generated responses. The teacher language model performs operations to enhance the accuracy of the teacher-generated responses, but the student language model is only exposed to the teacher-generated responses themselves. The updating of the student language model's parameters involves consulting the teacher model to verify the suitability of one or more candidate student-generated responses. In some implementations, the technique performs optimization to find a Nash equilibrium given preference information, without implicit or explicit reward maximization.
A method, computer program product, and computing system for processing a query for obtaining data from an unstructured database. A parsed representation of a query field of the query is generated by parsing the query field from the query. A fuzzified representation of the query field is generated by fuzzifying the parsed representation of the query field. A vectorized representation of the query field is generated by vectorizing the fuzzified representation of the query field. A matching input field is identified from the unstructured database by processing the vectorized representation of the query field. The matching input field is scored based upon, at least in part, weighting from a domain model. A weighted result is provided to the query using the scoring of the matching input field.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
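The parse → fuzzify → vectorize → match pipeline in the abstract above can be sketched as follows. The concrete choices here are assumptions: a "field:value" query syntax, character n-grams as the fuzzified representation, and cosine similarity over n-gram counts with a stubbed domain-model weighting.

```python
from collections import Counter
from math import sqrt

def parse(query: str, field: str) -> str:
    # Assumed "field:value" query syntax (the abstract does not fix one).
    return dict(part.split(":", 1) for part in query.split())[field]

def fuzzify(text: str, n: int = 3) -> list:
    # Character n-grams make the representation tolerant of misspellings.
    padded = f"  {text.lower()} "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def vectorize(grams) -> Counter:
    return Counter(grams)

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[g] * v[g] for g in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def best_match(query_vec: Counter, fields, weights) -> str:
    # Score candidate input fields, weighted by a (stubbed) domain model.
    scored = {f: cosine(query_vec, vectorize(fuzzify(f))) * weights.get(f, 1.0)
              for f in fields}
    return max(scored, key=scored.get)

fields = ["customer_name", "order_date", "total_amount"]
# Misspelled query field still matches via its n-gram overlap.
query_vec = vectorize(fuzzify(parse("field:custmer_name", "field")))
match = best_match(query_vec, fields, {"customer_name": 1.0})
```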
The techniques describe expanding a scope of a search to a domain associated with a guest application that is external to a domain associated with a host application. The host application receives a search query via a search field displayed in a first portion of a graphical user interface that is managed by the host application. The host application determines that the guest application is currently managing a second portion of the graphical user interface at a time when the search query is received. The host application determines that the guest application has registered with a search handler exposed by the host application to enable the domain associated with the guest application to be scoped for the search. The host application then passes the search query to the guest application, thereby enabling the guest application to render a search results page via the second portion of the graphical user interface.
Methods, systems, and apparatuses include receiving, via a conversational interface, user input from a user of an online system. A user input embedding is generated for the user input. A vector store is retrieved including tool description embeddings. A similarity search is performed using the user input embedding and the tool description embeddings. A set of tool descriptions is determined using results of the similarity search. A prompt is generated using the set of tool descriptions and the user input. Machine learning agents are applied to the prompt to cause the machine learning agents to use tools associated with the set of tool descriptions. A response to the prompt is received, from the machine learning agents, in response to the machine learning agents using the tools. An output to the user input based on the response is sent, via the conversational interface, to the user of the online system.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
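The tool-selection step from the abstract above (embed the user input, similarity-search a vector store of tool description embeddings, fold the top descriptions into a prompt) can be sketched as follows. The deterministic bag-of-words hashing stands in for a real embedding model, and the tool names and descriptions are hypothetical.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Deterministic bag-of-words hashing as a stand-in for a real
    # embedding model (an assumption; the abstract does not name one).
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[sum(ord(c) for c in tok) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

TOOLS = {  # hypothetical tool descriptions held in the vector store
    "get_weather": "look up the current weather forecast for a city",
    "send_email": "compose and send an email to a recipient",
    "book_meeting": "schedule a meeting on the user's calendar",
}
STORE = {name: embed(desc) for name, desc in TOOLS.items()}

def select_tools(user_input: str, top_k: int = 2):
    # Similarity search between the input embedding and tool embeddings.
    q = embed(user_input)
    sims = {name: float(q @ v) for name, v in STORE.items()}
    return sorted(sims, key=sims.get, reverse=True)[:top_k]

def build_prompt(user_input: str) -> str:
    # The selected tool descriptions are folded into the agents' prompt.
    names = select_tools(user_input)
    tool_block = "\n".join(f"- {n}: {TOOLS[n]}" for n in names)
    return f"Available tools:\n{tool_block}\n\nUser: {user_input}"

selected = select_tools("what is the weather forecast for a city")
```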
48.
CONFIDENCE ENHANCEMENT FOR RESPONSES BY DOCUMENT-BASED LARGE LANGUAGE MODELS
Systems and methods are provided for implementing confidence enhancement for responses by document-based large language models (“LLMs”) or other AI/ML systems. A first prompt is generated based on data items that are previously received or accessed. The first prompt is used by a first LLM or AI/ML system to extract requested information from the data items. One or more citations are generated and presented within a structured object together with a representation of the extracted information, in some cases, as output from a second LLM or AI/ML system. In some cases, the citations and/or the representation may be verified by a third LLM or AI/ML system, and reliability indicators may be generated for the citations and/or the representation based on determined accuracy of the citations and/or the representation. In this manner, the common issue of hallucinations may be mitigated.
G06F 16/3329 - Natural language query formulation
G06F 16/338 - Presentation of query results
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, at the stage of program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by adding security routines or objects to programs
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
Methods, systems, and computer program products are provided for dynamically reconfigurable tuning for wireless power and data communications. A wireless charging (WLC) device may improve the efficiency of variable power and data communication to a chargeable device with variable relative positioning and coupling in 3D space by dynamically reconfiguring transmitter tuning. A WLC transmitter may be dynamically reconfigured (e.g., between symmetric and asymmetric antenna impedance matching) based on at least one of the type of wireless transmission or a wireless transmission efficiency for the type of wireless transmission. For example, the controller may dynamically select a configuration for wireless power (e.g., or data) transmission based on the most efficient configuration determined from dynamically measured efficiencies for asymmetric and symmetric wireless power (e.g., or data) transmission. Tuning may be dynamically reconfigured, for example, by controlling an automatically variable inductor (e.g., comprising at least one ring switch) to automatically vary inductance.
H04B 5/79 - Near-field transmission systems, e.g. inductive or capacitive transmission systems, specially adapted for specific purposes for data transfer in combination with power transfer
H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
H02J 50/80 - Circuit arrangements or systems for wireless supply or distribution of electric power involving the exchange of data, concerning the supply or distribution of electric power, between transmitting devices and receiving devices
H04B 5/26 - Inductive coupling using coils
50.
SYSTEMS AND METHODS FOR GPT GUIDED NEURAL PUNCTUATION FOR CONVERSATIONAL SPEECH
Some disclosed embodiments are directed to obtaining decoded audio data including a spoken language utterance recognized in audio data and identifying a disfluency in the decoded audio data. Upon determining that correcting the disfluency would improve a readability score of the decoded audio data, the system generates a particular correction for the disfluency and applies the particular correction to the decoded audio data. Updated decoded audio data is then generated that reflects the particular correction and has improved readability over the original decoded audio data.
G10L 15/26 - Speech-to-text systems
G10L 15/01 - Assessment or evaluation of speech recognition systems
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals using source-filter models or psychoacoustic analysis
51.
SYSTEM AND METHOD FOR AUTOMATIC HYPERPARAMETER SELECTION FOR ONLINE LEARNING
Systems and methods for tuning hyperparameters for a machine learning model using a challenger champion model are described. A set of challenger configurations is generated based on a hyperparameter for tuning, and a subset of the set of challenger configurations is scheduled for evaluation based on a loss function. A loss value derived from the loss function for the challenger configurations is compared to a loss value derived from the loss function for a champion configuration, and the champion configuration is replaced with the challenger configuration based on the comparison of the loss value derived from the loss function for the challenger configuration and the loss value derived from the loss function for the champion configuration. When the champion is replaced, a new set of challenger configurations is generated based on the new champion configuration.
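The champion/challenger loop in the abstract above can be sketched as follows. This is a minimal sketch, assuming a user-supplied loss function and a `mutate` callback that generates challenger configurations around the current champion; the toy learning-rate example and its optimum are illustrative.

```python
import random

def tune(loss_fn, champion, mutate, rounds=10, n_challengers=4):
    """Champion/challenger tuning: challengers are generated around the
    current champion, evaluated with the loss function, and the champion
    is replaced whenever a challenger achieves a lower loss. Each
    replacement makes the next round's challengers derive from the new
    champion configuration."""
    for _ in range(rounds):
        challengers = [mutate(champion) for _ in range(n_challengers)]
        best = min(challengers, key=loss_fn)
        if loss_fn(best) < loss_fn(champion):
            champion = best
    return champion

# Toy example: tune a learning rate toward an (assumed) optimum of 0.1.
random.seed(0)
loss = lambda cfg: (cfg["lr"] - 0.1) ** 2
mutate = lambda cfg: {"lr": cfg["lr"] * random.uniform(0.5, 1.5)}
start = {"lr": 1.0}
best = tune(loss, start, mutate)
```

By construction the champion is only ever replaced on improvement, so the final loss is never worse than the starting loss.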
Innovations in intra block copy (“BC”) prediction as well as innovations in encoder-side search patterns and approaches to partitioning are described herein. For example, some of the innovations relate to use of asymmetric partitions for intra BC prediction. Other innovations relate to search patterns or approaches that an encoder uses during block vector estimation (for intra BC prediction) or motion estimation. Still other innovations relate to uses of BV search ranges that have a horizontal or vertical bias during BV estimation.
H04N 19/52 - Processing of motion vectors by encoding, by predictive encoding
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
H04N 19/51 - Motion estimation or motion compensation
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
53.
ENCODER-SIDE SEARCH RANGES HAVING HORIZONTAL BIAS OR VERTICAL BIAS
Innovations in intra block copy (“BC”) prediction as well as innovations in encoder-side search patterns and approaches to partitioning are described herein. For example, some of the innovations relate to use of asymmetric partitions for intra BC prediction. Other innovations relate to search patterns or approaches that an encoder uses during block vector estimation (for intra BC prediction) or motion estimation. Still other innovations relate to uses of BV search ranges that have a horizontal or vertical bias during BV estimation.
H04N 19/52 - Processing of motion vectors by encoding, by predictive encoding
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
H04N 19/51 - Motion estimation or motion compensation
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
54.
ADJUSTING QUANTIZATION/SCALING AND INVERSE QUANTIZATION/SCALING WHEN SWITCHING COLOR SPACES
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency when switching between color spaces during encoding and decoding. For example, some of the innovations relate to adjustment of quantization or scaling when an encoder switches color spaces between units within a video sequence during encoding. Other innovations relate to adjustment of inverse quantization or scaling when a decoder switches color spaces between units within a video sequence during decoding.
H04N 9/64 - Circuits for processing colour signals
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
55.
QUANTUM DEVICES FORMED FROM A SINGLE SUPERCONDUCTING WIRE HAVING A CONFIGURABLE GROUND CONNECTION
Quantum devices formed from a single superconducting wire having a configurable ground connection are described. An example quantum device, configurable to be grounded, comprises a single superconducting wire having at least a first section and a second section, each of which is configurable to be in a topological phase and at least a third section configurable to be in a trivial phase. The quantum device further comprises semiconducting regions formed adjacent to the single superconducting wire, where the single superconducting wire is configurable to store quantum information in at least four Majorana zero modes (MZMs). The semiconducting regions formed adjacent to the single superconducting wire may be used to measure quantum information stored in the at least four MZMs.
H10D 30/47 - FETs having zero-dimensional [0D], one-dimensional [1D] or two-dimensional [2D] charge carrier gas channels, having two-dimensional charge carrier gas channels, e.g. nanoribbon FETs or high electron mobility transistors [HEMT]
The disclosure herein describes reducing training bias in outputs generated by a generative language model. A communication segment associated with a communication is obtained by at least one processor of a generative language model. An output value associated with the communication segment is generated by the generative language model. The output value is mapped to a set of training bias values associated with the generative language model and based on the mapping of the output value to a training bias value of the set of training bias values, an alternative output value is generated. The alternative output value is used in a generated segment output for the communication segment. The accuracy of segment outputs generated by the generative language model is improved through reducing or eliminating its training biases.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
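The bias-mapping step in the abstract above (map an output value against known training-bias values and generate an alternative output on a match) can be sketched as follows. The bias set, the retry policy, and the toy generator are illustrative assumptions; the abstract does not specify how the alternative value is produced.

```python
TRAINING_BIAS_VALUES = {"john doe"}  # hypothetical memorized training value

def debias(generate, segment, max_retries=3):
    """If the model's output maps to a known training-bias value,
    request an alternative output value, per the abstract."""
    out = generate(segment)
    retries = 0
    while out.lower() in TRAINING_BIAS_VALUES and retries < max_retries:
        out = generate(segment)  # generate an alternative output value
        retries += 1
    return out

# Toy generator: the first draw reproduces a biased training value,
# the second draw is the alternative output used in the segment output.
draws = iter(["John Doe", "the customer"])
result = debias(lambda segment: next(draws), "greeting line")
```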
The present disclosure proposes a method, apparatus and computer program product for image augmentation. A first image and a selected theme may be received. The first image may be augmented based on the selected theme, the augmenting comprising: modifying the color of an object in the first image according to the selected theme, and/or adding element images corresponding to the selected theme in the first image. A second image may be generated based on the augmented first image and the selected theme, the second image being an image with decorative effects corresponding to the selected theme.
A method (100) for a quantum computer (300) is presented. The method (100) comprises receiving (110) a target matrix (338) comprising only real eigenvalues and block encoding the target matrix (338). A polynomial approximation is precomputed (130) for a function to be applied to the target matrix (338). Coefficients are selected (140) for a generating function that match the precomputed polynomial approximation. A polynomial history state (356) is generated (150), the polynomial history state (356) comprising a superposition of polynomials onto the block encoded target matrix by at least mapping the generating function to a quantum algorithm.
Described herein are technologies for automating discovery of a workload deployment, including its components and infrastructure resource dependencies, to enable management of complex workloads as a single abstraction. Workload application identification information is identified from a first compute resource hosting a main component of the workload deployment, and a set of automations specific to that workload application identification information is accessed. The set of automations identifies all workload components and the infrastructure resources on which each of the workload components depends. This data, including relationship data such as dependencies, is compiled into a data structure that serves as a reference for a workload abstraction. When a management operation is applied to the workload abstraction, the management layer redirects the operation to each of the components or infrastructure resources associated with the workload deployment.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G06V 30/18 - Extraction of image features or characteristics
61.
ARTIFICIAL INTELLIGENCE (AI)-BASED INCLUSIVE PROMPT RECOMMENDATIONS AND FILTERING
An inclusive prompt recommendation system for generative AI utilizes an inclusive prompt recommendation model to provide recommendations of inclusive language to include in a prompt in order to promote inclusivity and diversity of generated content. The inclusive prompt recommendation model is trained to analyze input text to identify situations, such as gaming, storytelling, social media, projects or presentations for work or school, and the like, where the user's intent is to generate an image or description of a person. The model is trained to identify patterns associated with ways users have historically incorporated inclusive terminology in text. The system can include an ethical filtering mechanism for ensuring that prompt recommendations do not contain language that directly or indirectly promotes bias and/or stereotypes.
According to examples, a distributed overclocking management system implements decentralized overclocking decisions that allow servers within a rack to locally process overclocking requests of a plurality of virtual machines (VMs) hosted thereon. A Global Workload Intelligence Agent (GWIA) specifies various metrics-based and schedule-based thresholds for overclocking the plurality of VMs. A Local Workload Intelligence Agent corresponding to a VM collects metrics of interest and, based on a signal from the GWIA, transmits an overclocking request to a Server Overclocking Agent (SOA) managing overclocking of servers on a rack. Based at least on a rack power budget assigned by a Global Overclocking Agent (GOA), the SOA may grant or deny the overclocking request.
A system, method, and computer-readable media for analyzing network traffic in a communications network are provided. A network analyzer collects a subset of module-level statistics from a plurality of network functions (NFs) within the network. The network analyzer analyzes the module-level statistics to identify a deviation from a normal state of operation. The network analyzer identifies a network stage where a network issue causes the deviation using a machine learning model trained on features associated with the deviation. The network analyzer provides suggestions for debugging, optimizing, or testing the network issue based on the deviation and the identified network stage. The network analyzer can collect additional module-level statistics or packet-level statistics at the identified network stage based on the deviation, confirm the network issue based on the additional module-level statistics or packet-level statistics, and take corrective action on an NF at the network stage to address the network issue.
H04L 43/20 - Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
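The first analysis step in the abstract above, detecting a deviation of module-level statistics from a normal state of operation, can be sketched with a simple z-score baseline. This is a stand-in for the trained model the abstract describes; the metric names and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_deviation(history, current, threshold=3.0):
    """Flag module-level statistics whose current value deviates from
    the normal-state baseline by more than `threshold` standard
    deviations."""
    flagged = {}
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no recorded variation; a z-score is undefined
        z = abs(current[metric] - mu) / sigma
        if z > threshold:
            flagged[metric] = z
    return flagged

# Hypothetical module-level statistics from two NFs in a normal state,
# followed by a current sample with an anomalous packet-drop rate.
history = {
    "pkt_drop_rate": [0.01] * 9 + [0.012],
    "latency_ms": [10, 11, 10, 12, 11, 10, 11, 10, 12, 11],
}
current = {"pkt_drop_rate": 0.5, "latency_ms": 11}
flagged = detect_deviation(history, current)
```

A flagged metric would then be handed to the stage-identification model described in the abstract.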
A camera system includes a camera, a privacy indicator configured to indicate an activation status of the camera, and an optical camera blocker disposed between the camera and an external environment. The optical camera blocker is configured to, when activated, obscure images captured by the camera. A privacy control circuit is configured to compare a state of a first circuit trace indicative of an intended operational status of the privacy indicator with a state of a second circuit trace indicative of an actual operational status of the privacy indicator, and based at least in part on detecting a mismatch between the state of the first circuit trace and the state of the second circuit trace, activate the optical camera blocker.
G03B 17/18 - Signals indicating condition of a camera member or suitability of light
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
65.
PROXIMITY-BASED CONTROL FOR VOIP CLIENT AUTHORIZATION
The present invention relates to telecommunications and specifically to methods and systems for enabling Internet-based telephone calls using Over-the-Top (OTT) communication services while ensuring compliance with jurisdictional regulatory requirements. Described herein are techniques for binding a non-mobile device, such as a laptop or tablet, to a mobile device capable of providing real-time location information. The binding is established through a local proximity network using short-range wireless communication protocols. The mobile device transmits its connectivity status and location data to either the non-mobile device or a cloud-based communication service, which then acts as a gatekeeper to permit or inhibit call initiation based on the received status information. This system ensures that calls made from the non-mobile device can be accurately located in real-time, facilitating compliance with regulations that mandate location verification for emergency services and lawful intercepts.
H04W 48/04 - Access restriction performed under specific conditions based on user or terminal location or mobility data, e.g. moving direction or speed
H04W 12/104 - Location integrity, e.g. secure geotagging
H04W 12/63 - Context-dependent security depending on location; Context-dependent security depending on proximity
H04W 64/00 - Locating users or terminals for network management, e.g. mobility management
66.
PERFORMING COMPUTING OPERATIONS USING A COLLECTIVE PROPERTY OF ELECTROMAGNETIC ENERGY OBSERVED THROUGH TRANSPARENT DISPLAYS
A computing system includes a plurality of processing units and a plurality of emitters. Each emitter is coupled to at least one of the plurality of processing units and is configured to display an electromagnetic signal at a location of the emitter based on instructions from an associated processing unit. The computing system further includes an electromagnetic sensor configured to detect a collective electromagnetic signal from the plurality of emitters.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
67.
ARTIFICIAL INTELLIGENCE (AI)-BASED ILLUSTRATED STORY GENERATION SERVICE
An Artificial Intelligence (AI)-based illustrated story generation system enables stylized character profiles to be generated from user photos and illustrated stories to be generated based on narrative prompts which indicate the character profiles to use in the story. To generate the character profiles, the photos are provided to a character description model trained to generate a description of a character based on the photo. The description is then provided to an image generating model which generates a stylized image of the character based on the description. To generate an illustrated story, a narrative prompt and at least one character profile are provided to a story text generating model which generates text for the story. The story text is then provided to an image generating model page by page which generates an illustration for each page based on the page text.
Implementations of the present disclosure provide a solution for spreadsheet table transformation. In this solution, one or more header areas and a data area of a spreadsheet table are detected. A hierarchical structure of each of the header areas is determined by analysis of cell merging and/or indents in the header area, and/or a function relationship between data items in corresponding cells of the data area. The spreadsheet table can be transformed to a relational table based on recognition of the hierarchical structure of the header area. In this way, by facilitating understanding of header structures based on the header hierarchy, it is possible to achieve automated transformation from spreadsheet tables to relational tables.
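The header-hierarchy recognition described above can be illustrated with a toy example; the merged-cell encoding (a label followed by blank cells), the helper names, and the output tuple shape are assumptions for illustration, not the disclosed method:

```python
# Illustrative sketch: recover a two-level column header hierarchy from a
# spreadsheet-like grid, where a merged top-level cell is represented by a
# label followed by blank cells, then unpivot the data area into relational
# (row, parent, child, value) tuples.

def flatten_headers(top_row, sub_row):
    parents, current = [], None
    for cell in top_row:
        if cell:                 # a non-blank cell starts a new merged group
            current = cell
        parents.append(current)  # blanks inherit the group label
    return list(zip(parents, sub_row))

def to_relational(top_row, sub_row, data_rows):
    header = flatten_headers(top_row, sub_row)
    tuples = []
    for i, row in enumerate(data_rows):
        for (parent, child), value in zip(header, row):
            tuples.append((i, parent, child, value))
    return tuples

top = ["2023", "", "2024", ""]      # "2023" and "2024" are merged over 2 cells
sub = ["Q1", "Q2", "Q1", "Q2"]
rows = [[10, 20, 30, 40]]
rel = to_relational(top, sub, rows)
```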
Innovations in intra block copy (“BC”) prediction as well as innovations in encoder-side search patterns and approaches to partitioning are described herein. For example, some of the innovations relate to use of asymmetric partitions for intra BC prediction. Other innovations relate to search patterns or approaches that an encoder uses during block vector estimation (for intra BC prediction) or motion estimation. Still other innovations relate to uses of BV search ranges that have a horizontal or vertical bias during BV estimation.
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/51 - Motion estimation or motion compensation
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
The present disclosure relates to providing a personalized service schedule in a computing network for a service provider to provide a service. In particular, the systems described herein utilize the signal history of a plurality of users to train a model and to predict a cumulative signal amount for an individual user over a predetermined time frame in the future by drawing inferences from the model. The system described herein further transforms the predicted cumulative signal amounts into activity data, and a personalized service schedule for the individual user may be updated based on the predicted activity data. The personalized service schedule may be utilized by disabling the service, pausing the service, or pausing a feature of the service when the predicted activity data indicates that the user is likely to be inactive.
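The last step of the abstract, mapping a predicted cumulative signal amount to a pause/run decision, can be sketched minimally; the threshold value and field names are illustrative assumptions not fixed by the abstract:

```python
# Minimal sketch: transform a predicted cumulative signal amount for a future
# window into activity data, and pause the service when the user is predicted
# to be inactive.

ACTIVITY_THRESHOLD = 5.0   # predicted signals below this mean "inactive"

def update_schedule(predicted_cumulative_signals):
    active = predicted_cumulative_signals >= ACTIVITY_THRESHOLD
    return {"predicted_active": active,
            "service_state": "running" if active else "paused"}

busy = update_schedule(42.0)   # user predicted active
idle = update_schedule(1.0)    # user predicted inactive
```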
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or of mean time between failures [MTBF]
G06Q 10/1093 - Calendar-based scheduling for persons or groups
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
Systems and methods relate to auto-tagging of data in a data lake or a data storage. Generating a statistical summary of the data lake and interactively receiving data in a selected column of an exemplar data file addresses the issue of efficiently and accurately auto-tagging data in a data lake. The present disclosure automatically generates a statistical summary of the data lake using lightweight off-line processing. A graphical user interface interactively receives an exemplar data file with a selection of a column in the exemplar data file. A list of candidate data-tagging patterns is generated based on the statistical summary, and the list is updated by removing candidate data-tagging patterns that under-generalize the data. The present disclosure determines a data-tagging pattern by selecting a candidate data-tagging pattern from the list based on its having the least number of matching columns in the data lake.
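The pruning step (removing candidates that under-generalize) can be sketched with regular expressions as the candidate patterns; the concrete regexes and column values are invented for illustration:

```python
# Hypothetical sketch: drop candidate data-tagging patterns that
# under-generalize, i.e. fail to match every value in the user-selected
# exemplar column.
import re

def prune_candidates(candidates, exemplar_column):
    return [p for p in candidates
            if all(re.fullmatch(p, v) for v in exemplar_column)]

column = ["2021-01-05", "1999-12-31"]
candidates = [r"\d{4}-\d{2}-\d{2}",   # generalizes to the whole column
              r"2021-\d{2}-\d{2}"]    # under-generalizes: misses 1999-12-31
kept = prune_candidates(candidates, column)
```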
A computer-implemented method of generating a security language query from a user input query includes receiving, at a computer system, an input security hunting user query indicating a user intention; selecting, using a trained machine learning model and based on the input security hunting query, an example user security hunting query and corresponding example security language query; generating, using the trained machine learning model, query metadata from the input security hunting query; generating a prompt, the prompt comprising: the input security hunting user query; the selected example user security hunting query and the corresponding example security language query; and the generated query metadata; inputting the prompt to a large language model; receiving a security language query from the large language model corresponding to the input security hunting query reflective of the user intention.
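The prompt construction enumerated in this abstract (input query, retrieved example pair, generated metadata) can be sketched as a simple assembly function; the section labels and the example KQL-style query are illustrative assumptions:

```python
# Sketch of assembling the prompt from its three parts: the input security
# hunting query, a retrieved example (natural language, security language)
# pair, and generated query metadata.

def build_prompt(user_query, example_pair, metadata):
    example_nl, example_sl = example_pair
    return "\n".join([
        f"User query: {user_query}",
        f"Example query: {example_nl}",
        f"Example translation: {example_sl}",
        f"Metadata: {metadata}",
    ])

prompt = build_prompt(
    "show failed sign-ins in the last hour",
    ("list failed logons today",
     "SigninLogs | where ResultType != 0 | where TimeGenerated > ago(1d)"),
    {"tables": ["SigninLogs"], "timespan": "1h"},
)
```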
A data processing system implements constructing a first prompt including a font mask of a reference character (RC) and a style prompt, sending the first prompt to a text2image model to iteratively generate salient content and concentrate the salient content within the font mask of RC as a first image of RC; concatenating two of the first images as a second image; generating a combined font mask of the font mask and a font mask of a target character (TC); constructing a second prompt including the combined font mask and the second image, sending the second prompt to the model to iteratively generate salient content and in-paint the salient content within a half of the combined font mask as a third image of RC and TC; cropping a styled TC image from the third image using the font mask of TC; providing the styled TC image to a client device.
One example provides a method enacted on a scanning display device comprising an illumination source, and a scanning mirror system including a sense circuit coupled to a scanning mirror. The method comprises operating the scanning mirror system using a drive signal while operating the illumination source to thereby project an image, obtaining a control parameter based at least upon an output of the sense circuit, and comparing the control parameter to a noise threshold condition. The method further comprises, when the control parameter does not meet the noise threshold condition, continuing operating the scanning mirror according to the drive signal. The method also comprises, when the control parameter meets the noise threshold condition, adjusting the drive signal to form an adjusted drive signal to change a trajectory of the scanning mirror.
G09G 3/02 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes by tracing or scanning a light beam on a screen
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
Examples are disclosed that relate to number-theoretic-transform (NTT) architectures and inverse NTT (INTT) architectures for module lattice-based cryptographic algorithms. One example provides a device for performing an NTT for lattice-based cryptographic algorithms. The device comprises a memory block, a read address permutation generator configured to read input values from the memory block, a commutator stage comprising a first commutator layer of commutators and a second commutator layer, a butterfly stage connected to output of the commutator stage, and a write address permutation generator configured to write output values to the memory block.
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert, and users' encryption keys not requiring secrecy
77.
TRANSPARENTLY SERVICING A HOST COMPUTE LAYER OF A VIRTUAL MACHINE
A method is disclosed for updating a host compute layer (HCL) in a virtual machine (VM) host computer system. The method involves determining the availability of an update for the HCL of a guest VM operating on the VM host system. A message is sent to the HCL to persist the HCL's operating state, including pausing the execution of the guest operating system (OS) and then persisting the operating state. Subsequently, the virtual processor (VP) system associated with the guest VM is stopped, a register is set to a power-on or reset value, and updated HCL code is copied into the memory space of the HCL. The VP system is then resumed, booting the updated HCL, which restores the operating state and resumes the execution of the guest OS.
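The ordered servicing sequence in this abstract can be modeled as a toy state machine that only demonstrates the order of operations; all dictionary fields and step labels are invented for illustration:

```python
# Toy sketch of the HCL servicing sequence on a simulated VM host: pause the
# guest OS, persist HCL state, stop the VP system, reset a register, copy the
# updated HCL code, resume the VP system, restore state, resume the guest OS.

def service_hcl(vm, new_hcl_code):
    log = []
    vm["guest_running"] = False;              log.append("pause guest OS")
    vm["saved_state"] = dict(vm["hcl_state"]); log.append("persist HCL state")
    vm["vp_running"] = False;                 log.append("stop VP system")
    vm["register"] = 0;                       log.append("set register to reset value")
    vm["hcl_code"] = new_hcl_code;            log.append("copy updated HCL code")
    vm["vp_running"] = True;                  log.append("resume VP, boot updated HCL")
    vm["hcl_state"] = vm.pop("saved_state");  log.append("restore operating state")
    vm["guest_running"] = True;               log.append("resume guest OS")
    return log

vm = {"guest_running": True, "vp_running": True, "register": 0x1F,
      "hcl_code": "v1", "hcl_state": {"sessions": 3}}
steps = service_hcl(vm, "v2")
```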
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
78.
PERFORMANCE EVALUATION OF GENERATIVE QUESTION-ANSWERING SYSTEMS
Systems and methods are disclosed herein for evaluating the performance of a question-answering model. In an example system, a set of prior question-answer pairs is obtained. In an example, each prior question-answer pair comprises a question and an associated answer that was generated previously. Each prior question-answer pair is provided to an LLM to obtain an evaluation score for the prior question-answer pair. In an embodiment, the evaluation score contains a value indicative of a quality of the answer to the question. An evaluation model is trained using features and labels, where the features are based on each prior question-answer pair and the labels are based on the evaluation score for each prior question-answer pair. When a current question-answer pair is obtained (e.g., for evaluation), the evaluation model is applied to the current question-answer pair to generate an evaluation score.
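The feature/label training loop above can be illustrated with a deliberately tiny stand-in, not the disclosed system: features come from each prior question-answer pair, labels are the LLM evaluation scores, and a 1-nearest-neighbour "model" scores new pairs. The feature choice (answer length) is purely an illustrative assumption:

```python
# Tiny stand-in for the evaluation-model training loop: featurize prior
# question-answer pairs, use LLM scores as labels, then score a current pair
# with the label of its nearest training feature.

def featurize(question, answer):
    return [len(answer.split())]   # assumed feature: answer word count

def train(pairs, llm_scores):
    return list(zip([featurize(q, a) for q, a in pairs], llm_scores))

def evaluate(model, question, answer):
    f = featurize(question, answer)
    return min(model, key=lambda m: abs(m[0][0] - f[0]))[1]

prior = [("Q1", "short answer"),
         ("Q2", "a much longer and more detailed answer text")]
scores = [0.2, 0.9]                # labels obtained from the LLM
model = train(prior, scores)
score = evaluate(model, "Q3", "another fairly long detailed answer goes here")
```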
Systems and methods are disclosed herein for compressing a prompt. In an example system, an importance score listing is obtained that includes a score indicative of an importance of a plurality of dataset keywords. From the importance score listing, a keyword importance score is identified for a plurality of keywords in a current text fragment, such as a text fragment to be compressed. A set of placeholders in an abstract prompt template is populated based on the current text fragment. The current text fragment is compressed based on the importance of the plurality of keywords in the current text fragment to generate a compressed text fragment. In an example, the compressed text fragment is included in the prompt for transmission to a computing entity, such as a large language model of a generative question-answering system.
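The compression step, dropping low-importance words from a text fragment, can be sketched minimally; the importance-score listing, the threshold, and the sample sentence are invented for illustration:

```python
# Minimal sketch of importance-based prompt compression: keep only words of
# the text fragment whose importance score clears a threshold.

def compress(fragment, importance, threshold=0.5):
    kept = [w for w in fragment.split()
            if importance.get(w.lower(), 0.0) >= threshold]
    return " ".join(kept)

scores = {"outage": 0.9, "database": 0.8, "the": 0.1, "was": 0.1,
          "caused": 0.6, "by": 0.1, "a": 0.1, "deadlock": 0.95}
out = compress("The outage was caused by a database deadlock", scores)
```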
Techniques are described for piecewise generation and transmission of authentication tokens in a collaboration environment. A sender system associated with a user of a collaboration platform intending to send a communication to multiple recipients can request authentication tokens for the recipients from an authorization service. The authorization service can generate and transmit to the sender system a base token, which applies to multiple communication recipients, and a respective authentication delta for each recipient. The base token can be signed or unsigned. Each authentication delta can include recipient-specific data such as a payload claim serving as an immutable identifier of the recipient and a unique signature for the recipient. The sender system can then assemble an authentication token for each recipient by combining the base token with the appropriate authentication delta. Optionally, transformation delta(s) can be used to alter the authentication tokens on a per-user basis.
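The assembly described above, one base token combined with a per-recipient authentication delta, can be sketched with plain dictionaries; real tokens would be signed (e.g. JWTs), so the claim names and the stub signature here are illustrative assumptions:

```python
# Hypothetical sketch of piecewise token assembly: the authorization service
# issues one base token plus a per-recipient delta; the sender combines them
# into a full authentication token per recipient.

def make_base_token(sender, scope):
    return {"iss": "authz-service", "sub": sender, "scope": scope}

def make_delta(recipient):
    return {"aud": recipient,                  # immutable recipient identifier
            "sig": f"sig-for-{recipient}"}     # stub per-recipient signature

def assemble(base, delta):
    token = dict(base)   # copy so the base can be reused for other recipients
    token.update(delta)
    return token

base = make_base_token("alice@example.com", "chat.send")
tokens = [assemble(base, make_delta(r)) for r in ("bob", "carol")]
```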
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A computer-implemented method, computer program product and computing system for: enabling a generative AI system to effectuate an analysis protocol; enabling a user to utilize the generative AI system during an interactive session concerning the analysis protocol; receiving content during the interactive session, thus defining received content; and determining whether the received content is relevant with respect to the analysis protocol.
A computer-implemented method, computer program product and computing system for: enabling a generative AI system to effectuate an analysis protocol; defining a visual representation of the analysis protocol; associating various portions of the analysis protocol with various portions of the visual representation; enabling a user to utilize the generative AI system during an interactive session concerning the analysis protocol; and identifying an associated portion of the visual representation based, at least in part, upon a portion of the analysis protocol that is the current topic of the interactive session.
A method, computer program product, and computing system for reducing communication session downtime includes transmitting first signals at a predetermined time interval of X seconds; upon initiation of a reboot operation of a first device associated with the processor, decreasing the predetermined time interval from X seconds to Y seconds; transmitting the first signals at the time interval of Y seconds; and once the reboot of the driver is complete, increasing the predetermined time interval from Y seconds to X seconds.
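The interval logic above (X-second signals normally, tightened to Y seconds during a reboot, then restored) can be sketched as a small state holder; the concrete X and Y values are illustrative:

```python
# Sketch of the heartbeat-interval adjustment around a reboot: transmit at
# X seconds normally, decrease to Y seconds when a reboot starts, and restore
# X once the reboot completes.

class Heartbeat:
    def __init__(self, normal_s=30, reboot_s=5):
        self.normal_s, self.reboot_s = normal_s, reboot_s
        self.interval = normal_s

    def on_reboot_start(self):
        self.interval = self.reboot_s   # decrease X -> Y

    def on_reboot_complete(self):
        self.interval = self.normal_s   # restore Y -> X

hb = Heartbeat()
before = hb.interval
hb.on_reboot_start();    during = hb.interval
hb.on_reboot_complete(); after = hb.interval
```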
This disclosure describes an expert system that can be used to automatically understand the function of a binary. The expert system includes a large language model (LLM) to determine investigatory steps that are implemented by a suite of tools. One application is malware detection. The expert system uses the tools to gather data and manipulate the binary to gain greater understanding of its function. Data generated during the investigation can be stored and retrieved from a memory representation system. This involves the LLM designing an investigation plan based on both default choices and responses to the data gathered using the tools. The expert system can adjust the plan after each step. Translators use expert knowledge and understanding of tool functions to convert tool outputs into natural language prompts that can be meaningfully understood by the LLM and to convert natural language output by the LLM into calls to the tools.
Generally discussed herein are devices, systems, and methods for resource retrieval. A method may include determining that a calendar event is scheduled to occur in a specified period of time, responsive to the determination, extracting content of a calendar event on a calendar of the messaging interface, generating a list of resources accessible by the user and related to the extracted content of the calendar event, ranking the resources by a comparison of the extracted content of the calendar event and the content of resources of the list of resources, and causing respective summaries of a specified number of the respective resources with higher respective ranks to be output on the display.
G06Q 10/1093 - Calendar-based scheduling for persons or groups
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 16/2457 - Query processing with adaptation to user needs
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
This document relates to relational databases and corresponding data tables. Non-conforming data tables can be automatically transformed into conforming relational data tables. One example can obtain conforming relational data tables and can generate training data without human labelling by identifying a transformational operator that will transform an individual conforming relational data table to a non-conforming data table and an inverse transformational operator that will transform the non-conforming data table back to the individual conforming relational data table. The example can train a model with the training data. The trained model can synthesize programs to transform other non-conforming data tables to conforming relational data tables.
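The self-supervised data generation described above, pairing a transformational operator with its inverse, can be illustrated with a simple pivot/unpivot pair; the operator choice and data are assumptions for illustration, not the disclosed operator set:

```python
# Illustrative sketch: apply a transformational operator (a simple pivot) to a
# conforming relational table to synthesize a non-conforming table, and pair
# it with the inverse operator (unpivot) as the training label, with no human
# labelling required.

def pivot(rows):
    """Operator: (key, col, val) triples -> wide dict-of-dicts table."""
    table = {}
    for key, col, val in rows:
        table.setdefault(key, {})[col] = val
    return table

def unpivot(table):
    """Inverse operator: wide table -> sorted relational triples."""
    return sorted((k, c, v) for k, cols in table.items()
                  for c, v in cols.items())

conforming = [("r1", "Q1", 10), ("r1", "Q2", 20), ("r2", "Q1", 30)]
non_conforming = pivot(conforming)            # synthesized training input
training_pair = (non_conforming, "unpivot")   # label: the inverse operator
roundtrip = unpivot(non_conforming)           # inverse restores the original
```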
Some embodiments of an interception-based unpacker leverage an auto-unpacker of a packed file, using certain hooks, to obtain unpacked content even when the specific compression and encryption algorithms that were used to pack the packed file are unknown. The unpacked content is studied directly, or injected into a copy of the packed file to create an unpacked executable version of the packed file. A hook on a process loader is utilized to obtain a pre-execution map of memory allocated to a target packed process. One or more interrupt hooks or system call hooks, which are triggered by permission changes or by write permission or execution permission exceptions, are utilized to obtain copies of unpacked content. In some embodiments, the interception-based unpacker executes primarily or entirely in kernel space. Embodiments of the interception-based unpacker are operable in open source kernel or closed source kernel operating systems.
Example aspects include techniques for provisioning downstream access to requested data within a data lake with cell-level granularity. These techniques include receiving a request for downstream access to filtered data from a data lake, generating a logical view to the data lake based on the request, the logical view restricted to the filtered data, and generating a temporary storage location for storing retrieved data received from the data lake via the logical view. The techniques also include assigning a compute cluster to the logical view, and accessing, via the logical view, by the compute cluster, the filtered data including storing the filtered data within the temporary storage location.
A device is configured to communicate on a mobile communications network. An incoming call is received, and it is determined that the incoming call meets a predetermined criteria indicating a probable source of the incoming call. On a display of the device, an option is rendered for answering the incoming call with a generated voice response in lieu of a voice of a user of the device. Text options for generating a voice response are also rendered. The incoming call is answered and generated speech corresponding to the selected text option is sent.
H04M 3/436 - Arrangements for intercepting incoming calls
G10L 13/047 - Architecture of speech synthesisers
G10L 15/22 - Procedures used during the speech recognition process, e.g. man-machine dialogue
H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
90.
DETECTION AND REMOVAL OF PREDEFINED SENSITIVE INFORMATION TYPES FROM ELECTRONIC DOCUMENTS
Automated and semi-automated document redaction technology is disclosed herein. In certain example embodiments, ‘context-aware’ redaction is provided. Automated techniques are used to identify a set of potentially sensitive item(s) within a document. The potentially sensitive item(s) are filtered based on contextual information, such as an entity identifier (e.g. a person identifier, a person group identifier identifying a group of multiple people, an organization identifier, etc.), resulting in a filtered set of redaction candidate(s). The filtered redaction candidate(s) may, for example, be redacted from the document automatically, or outputted as suggestions in an assisted redaction tool, e.g. via a document redaction graphical user interface. Other example embodiments consider selective redaction when uploading and/or downloading documents via a proxy server, to prevent intended or unintended release of potentially sensitive information, e.g. in a web browsing context.
A time series of data for each API exposed by a service is generated to identify when a peak load occurs over time. The peak load for each API is then predicted using the time series data. The APIs exposed by the service are then prioritized based on the predicted peak load. A performance test load and duration are then calculated for a set of the APIs, the set being chosen based on priority, and a performance test plan is automatically generated identifying the APIs to be tested, the performance tests to be applied, and the load types and durations to be applied during the performance tests. The performance test plan can be automatically generated for APIs having different traffic pattern trends.
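The prioritization step can be sketched with a naive peak predictor (the historical maximum standing in for a real time-series forecaster); API names and traffic numbers are invented for illustration:

```python
# Sketch: predict each API's peak load from its time series and rank the APIs
# by predicted peak, yielding the order for the performance test plan.

def predict_peak(series):
    return max(series)   # naive stand-in for a real time-series forecaster

def prioritize(api_series):
    peaks = {api: predict_peak(s) for api, s in api_series.items()}
    return sorted(peaks, key=peaks.get, reverse=True)

history = {
    "/orders":  [120, 340, 980, 610],
    "/login":   [50, 75, 60, 80],
    "/reports": [10, 20, 400, 30],
}
plan_order = prioritize(history)   # highest predicted peak first
```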
Techniques are described herein that are capable of triggering a security action based on an AI-generated recommendation of a code package. An AI model is caused to recommend an identified code package to resolve a coding problem by providing an AI prompt to the AI model. The AI prompt requests identification of a code package that is written in a programming language and that comprises a designated functionality that resolves the coding problem. A response to the AI prompt is received from the AI model. The response identifies the identified code package. Based at least on confirmation of non-existence of the identified code package or absence of publication of the identified code package in a verified code repository or a value of an attribute of the identified code package satisfying a criterion associated with non-trustworthiness, automatic execution of a security action with regard to the identified code package is triggered.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Flows of a communication session in a software defined network (SDN) are efficiently managed. A network virtual appliance offloads, to a hardware-based network interface device, processing of data packets of a flow in accordance with packet processing rules associated with the flow. After the offload, subsequent data packets for the offloaded flow are processed and forwarded by the hardware-based network interface device without forwarding to the network virtual appliance.
Examples are disclosed that relate to number-theoretic-transform (NTT) architectures and inverse NTT (INTT) architectures for module lattice-based cryptographic algorithms. One example provides a device (300) for performing an NTT for lattice-based cryptographic algorithms. The device comprises a memory block (302), a read address permutation generator (303) configured to read input values from the memory block, a commutator stage (304) comprising a first commutator layer (306) and a second commutator layer (308), a butterfly stage (310) connected to output of the commutator stage, and a write address permutation generator (301) configured to write output values to the memory block.
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert, and users' encryption keys not requiring secrecy
95.
TRANSPARENTLY SERVICING A HOST COMPUTE LAYER OF A VIRTUAL MACHINE
A method is disclosed for updating a host compute layer (HCL) in a virtual machine (VM) host computer system. The method involves determining the availability of an update for the HCL of a guest VM operating on the VM host system. A message is sent to the HCL to persist the HCL's operating state, including pausing the execution of the guest operating system (OS) and then persisting the operating state. Subsequently, the virtual processor (VP) system associated with the guest VM is stopped, a register is set to a power-on or reset value, and updated HCL code is copied into the memory space of the HCL. The VP system is then resumed, booting the updated HCL, which restores the operating state and resumes the execution of the guest OS.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Technology is disclosed herein for personalized writing assistance via an LLM integration in a software application. In an implementation, a computing device identifies user-specific preferences for content creation. The computing device submits a prompt to an LLM that includes selected content associated with a user and the user-specific preferences, along with a request for the LLM to suggest an intent to modify the selected content in view of the user-specific preferences. The computing device receives a reply from the LLM including the intent to modify the selected content. When the user accepts the suggestion, the computing device generates and submits a second prompt to the LLM including a request for the LLM to modify the selected content according to the intent. The computing device receives a reply from the LLM including a modified version of the selected content and displays the modified version of the selected content.
A data processing system implements: after determining that a test in a target branch of a code repository starts failing, receiving a pull request (PR) onto the target branch and blocking the PR (invoking a build policy and a test policy); after determining that the failing test in the target branch is disabled, tracking the disabled test in a disabled test file; in response to a request to requeue the test policy, determining whether an artifact in the build policy is identical to an artifact in the test policy; when the artifacts are not identical, fetching all versions of the disabled test file within a time period, generating a union of the contents of all versions of the file, and applying the requeued test policy using the union content, the time period being between a start time of the build policy and a start time of the requeued test policy; and unblocking the PR.
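The "union content" step above can be sketched as follows, under the assumption that the disabled-test file is line-oriented and that each fetched version carries a timestamp. The function name and version representation are illustrative, not from the disclosure.

```python
from datetime import datetime

def union_content(versions: list,
                  build_start: datetime,
                  requeue_start: datetime) -> str:
    """Union the lines of every version of the disabled test file whose
    timestamp falls between the build policy's start time and the
    requeued test policy's start time. Each version is a
    (timestamp, file_text) pair."""
    union = []
    seen = set()
    for ts, text in sorted(versions):
        if build_start <= ts <= requeue_start:
            for line in text.splitlines():
                if line not in seen:  # keep first occurrence, preserve order
                    seen.add(line)
                    union.append(line)
    return "\n".join(union)
```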
Systems, methods, devices, and computer readable storage media described herein provide techniques for prioritizing software development using a trained model. In an aspect, model features are determined based on analysis of user behavior with respect to a software application. A software development prioritization (SDP) system determines data associated with the model features and utilizes a generative artificial intelligence (AI) model to summarize the model features based on the determined data. The SDP system determines, based on the summaries, a similarity between software development items and the model features and prioritizes one of the software development items over another based on the determined similarities. In a further embodiment, the SDP system causes a software development task corresponding to the prioritized software development item to be performed before another software development task corresponding to a different software development item. In an aspect, model features are determined utilizing a trained machine learning model.
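The similarity-and-prioritization step above can be sketched as follows. For illustration, simple token-overlap (Jaccard) similarity stands in for the generative-AI summaries and trained model described in the abstract; all names are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    # Token-overlap similarity between two short texts.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def prioritize(dev_items: list, feature_summaries: list) -> list:
    # Score each software development item by its best similarity to any
    # model-feature summary, then order items by descending score, so the
    # top item's task would be performed first.
    def score(item: str) -> float:
        sims = [jaccard(item, s) for s in feature_summaries]
        return max(sims) if sims else 0.0
    return sorted(dev_items, key=score, reverse=True)
```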