Users create, view, and interact with massive amounts of content every day, including browsing websites, engaging with social platforms, collaborating electronically with friends and colleagues, transacting with apps and/or plugins, creating and editing documents, and the like. Traditionally, the content associated with various user interactions is siloed based on the entity with which the user interacts. Accordingly, to achieve a goal, it is up to the user to compile content across various sources, identify and organize tasks and subtasks, and track progress and completion of the goal. The present application determines a goal for a user based on monitoring user interactions. A data structure is created for the goal, including determining applicable tasks and subtasks. The data structure becomes a living entity for storing and tracking the goal by continuing to monitor user interactions and determine the user's progress towards completion of each task and, ultimately, the goal.
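A minimal Python sketch of the kind of goal data structure described above; the Goal and Task names, the subtask nesting, and the progress calculation are illustrative assumptions, not the application's actual schema.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    done: bool = False
    subtasks: list["Task"] = field(default_factory=list)

    def progress(self) -> float:
        # A leaf task is 0.0 or 1.0; a parent averages its subtasks.
        if not self.subtasks:
            return 1.0 if self.done else 0.0
        return sum(t.progress() for t in self.subtasks) / len(self.subtasks)

@dataclass
class Goal:
    description: str
    tasks: list[Task] = field(default_factory=list)

    def progress(self) -> float:
        if not self.tasks:
            return 0.0
        return sum(t.progress() for t in self.tasks) / len(self.tasks)

# As monitored user interactions match tasks, they are marked complete and
# the goal's progress is re-derived from the living data structure.
trip = Goal("Plan a trip", [Task("Book flight"), Task("Reserve hotel")])
trip.tasks[0].done = True
print(trip.progress())  # 0.5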
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
2.
SYSTEMS AND METHODS FOR SENSOR-AGNOSTIC REPRESENTATION OF HUMAN PRESENCE INFORMATION
Systems and methods for sensor-agnostic representation of human presence information are described. An operating system of a computing device with a display screen is configured to receive, from a sensor system, human presence information representing the position and posture of one or more persons detected by a sensor of the sensor system, where the human presence information is determined based on a coordinate system associated with the display screen. The human presence information has the same format regardless of the sensor technology. The human presence information includes an elevation angle, an azimuth angle, a face pitch, a face roll, and/or a face yaw of the person relative to the sensor and/or display screen. The operating system may use the human presence information to implement privacy-related features and/or may provide the human presence information to one or more applications via an API.
A set of incident records are received for a computing system. The incident records are analyzed to identify similar incident records which are then linked. Incident clusters are generated based upon the links and incident records in each cluster are ranked. A prompt is generated to an artificial intelligence (AI) model based on the ranked, related incidents and the AI model returns a response that identifies a root cause and mitigation steps corresponding to the ranked incidents.
This document relates to providing meaningful information about a dataset. One example can obtain aggregated summaries and a related knowledge graph. The example can enable local, community, and global retrieval-augmented generation utilizing the aggregated summaries and the knowledge graph.
The present disclosure provides methods, systems and storage media for conducting a security review of a system. Certain examples relate to the use of trained generative AI to generate a root security query using a machine learning (ML) generator, based on a system description. A security requirement associated with the root security query is extracted, and an indication of the root security query is output at a user interface. A user input is received in response, and the ML generator generates a follow-up request that is output via the user interface. A second user input is received in response to the follow-up request, and the ML generator then determines that the security requirement is not satisfied by the target system.
Power draw stabilization is provided. A target power consumption of a source load is determined. The source load is generated by electronics supplied power by a primary power source through a power rail. The power rail is coupled to a capacitor bank by a bi-directional converter configured to smooth fluctuations in power drawn from the primary power source by performing mode switch operations. The mode switch operations include, in response to the source load exceeding the target power consumption, controllably switching the operational mode of the bi-directional converter to a second directional mode that directs current released from the capacitor bank to the power rail. The mode switch operations further include, in response to the source load dropping below the target power consumption, controllably switching the operational mode of the bi-directional converter to a first directional mode to direct current from the power rail into the capacitor bank.
H02J 3/32 - Arrangements for balancing the load in a network by storage of energy using batteries with converting means
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over
7.
PERSONALIZED STYLIZING LARGE LANGUAGE MODEL WRITING ASSISTANT
Personally-stylized content can be generated without fine tuning a model. A personally-stylized content generation method can include receiving a first request for first content to be stylized in a style of written prose previously produced by a user, applying a previously trained retriever model to the first request to obtain second content previously produced by the user resulting in obtained content, populating a prompt with the obtained content and the first request resulting in an augmented prompt, providing the augmented prompt to a large language model (LLM), receiving personally-stylized content from the LLM, the personally-stylized content including elements of the style of the written prose of the user, and providing the personally-stylized content to the user.
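As a rough illustration of the retrieve-then-prompt flow described in this abstract, the following Python sketch stubs out the retriever and prompt assembly; retrieve_user_samples, build_augmented_prompt, and USER_CORPUS are hypothetical names, and the real retriever would be a trained model rather than a lookup.

def retrieve_user_samples(request: str, user_id: str, k: int = 3) -> list[str]:
    # Stand-in for the trained retriever model: return the user's k prior
    # writings most relevant to the request (e.g., via embedding search).
    corpus = USER_CORPUS.get(user_id, [])
    return corpus[:k]

def build_augmented_prompt(request: str, samples: list[str]) -> str:
    examples = "\n\n".join(f"Example of the user's writing:\n{s}" for s in samples)
    return (
        f"{examples}\n\n"
        f"Write the following in the same personal style:\n{request}"
    )

USER_CORPUS = {"u1": ["Thanks so much for the kind note!", "Cheers, and talk soon."]}
request = "Decline a meeting politely"
prompt = build_augmented_prompt(request, retrieve_user_samples(request, "u1"))
print(prompt)  # the augmented prompt would then be sent to the LLM

The augmented prompt, not the bare request, is what goes to the LLM, which is how the output picks up the user's style without any fine-tuning.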
In an example embodiment, an embedding model is used to generate an embedding of a natural language search goal specified by a user, the embedding representing user intent of the user. Playbooks in a database of playbooks are also run through the embedding model to generate an embedding for each playbook indicative of a meaning of each playbook. A semantic relationship score can then be computed for each combination of the natural language search goal and a playbook, using the embeddings. These semantic relationship scores can then be passed into a ranking machine learning model, along with measured success rates for the playbooks, to generate a ranking of the playbooks. Based on this ranking, a set of filters and actions corresponding to at least one of the playbooks may then be recommended to the user.
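A minimal sketch of the scoring-and-ranking step, assuming toy embedding vectors and a fixed blend of semantic score and success rate in place of the learned ranking model:

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy embeddings standing in for the embedding model's output.
goal_vec = [0.9, 0.1, 0.3]
playbooks = {
    "restart-service": {"vec": [0.8, 0.2, 0.4], "success_rate": 0.92},
    "rotate-keys":     {"vec": [0.1, 0.9, 0.2], "success_rate": 0.75},
}

# Stand-in for the ranking ML model: blend semantic score with measured
# success rate (the real model would be learned, not a fixed blend).
ranked = sorted(
    playbooks.items(),
    key=lambda kv: 0.7 * cosine(goal_vec, kv[1]["vec"]) + 0.3 * kv[1]["success_rate"],
    reverse=True,
)
print([name for name, _ in ranked])  # ['restart-service', 'rotate-keys']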
Innovations in encoder-side options for intra block copy (“BC”) prediction mode facilitate intra BC prediction that is more effective in terms of rate-distortion performance and/or computational efficiency of encoding. For example, some of the innovations relate to concurrently performing block vector (“BV”) estimation and making block splitting decisions for a block. Other innovations relate to selectively merging blocks into a larger block during BV estimation.
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding; the unit being an image region, e.g. an object; the region being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding; the unit being an image region, e.g. an object; the region being a block, e.g. a macroblock
H04N 19/57 - Motion estimation characterised by a search window with variable size or shape
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Circuitry configured to determine whether a backup battery is capable of providing power to a load is provided. A circuit can include a backup battery, a sensor electrically coupled to generate condition data indicative of a condition in an environment of the backup battery, a first current sense device electrically coupled to generate first current data indicative of an amount of current provided to load circuitry, and a backup battery controller coupled to the backup battery, the sensor, and the first current sense device, the backup battery controller configured to determine, based on the condition data and the first current data, whether the backup battery is available to provide power to the load circuitry, and provide an electrical signal indicative of whether the backup battery is available to provide the power to the load circuitry.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
Systems and methods are provided for determining whether a user has deferred one or more emails. More specifically, a system and method may determine whether an email is likely to have been deferred by a user, perform at least one action on the email determined likely to have been deferred, determine a mode for providing an indication to the user to follow-up with the email determined likely to have been deferred, and cause an indication specific to the email determined likely to have been deferred to be provided to the user. In some instances, the notifications are based on a device associated with the user and/or may be included in at least one of a task management application and/or a calendar application.
Systems and methods are disclosed herein for providing a shared ambiance in a virtual meeting. In some instances, systems receive audio signals from a set of audio inputs corresponding to a plurality of participants in a virtual meeting. Each audio signal includes a voice component and a background noise component. Systems isolate the background noise component from the voice component for each received audio signal and then determine an ambiance score for each isolated background noise component. Based on the determined ambiance scores, systems select a particular background noise component from the isolated background noise components and transmit the particular background noise component to a set of audio outputs corresponding to the plurality of participants in order to provide a shared ambiance for the plurality of participants in the virtual meeting.
Examples of the present disclosure describe systems and methods for implementing a success rate SLI solution for service pipelines. In aspects, a metrics service may detect that a number of payloads relating to one or more activities or service requests have been received at one or more services of a service pipeline. For each payload processed by a service of the service pipeline, the metrics service may determine a set of payload processing metrics for the service. The set of payload processing metrics for the service may be applied to the payload. The payload processing metrics for each service may be aggregated and used to calculate a success rate for payloads processed using the service pipeline. Based on the success rate, an SLI may be evaluated and/or an action associated with the activity/service request may be performed.
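The aggregation step might look like the following sketch, where the payload metric schema and the 0.99 objective are invented for illustration:

payload_metrics = [
    {"payload": "p1", "service": "ingest", "succeeded": True},
    {"payload": "p1", "service": "enrich", "succeeded": True},
    {"payload": "p2", "service": "ingest", "succeeded": True},
    {"payload": "p2", "service": "enrich", "succeeded": False},
]

# A payload succeeds only if every service in the pipeline processed it.
by_payload: dict[str, bool] = {}
for m in payload_metrics:
    by_payload[m["payload"]] = by_payload.get(m["payload"], True) and m["succeeded"]

success_rate = sum(by_payload.values()) / len(by_payload)
SLI_TARGET = 0.99  # hypothetical objective
print(success_rate, "SLI met" if success_rate >= SLI_TARGET else "SLI violated")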
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 41/0654 - Management of faults, events, alarms or notifications using network fault recovery
H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
14.
ERROR CORRECTION USING ON-DIE PARITY BIT STORAGE AND TRANSITIONAL SIGNALS
The described technology provides a multi-level error correction method, including encoding data received from a double data rate (DDR) memory by performing primary coding to generate transitional symbols, wherein the primary coding comprises at least one of cyclical redundancy check (CRC) encoding and single error correction double error detection (SECDED) encoding, performing a secondary coding on the transitional symbols to generate inner codes, the inner codes comprising code 1 parities generated from the transitional symbols and code 2 parities generated from the transitional symbols and metadata stored on the DDR memory, wherein the secondary coding comprises Reed Solomon (RS) encoding, and saving the inner codes on parity bit storage locations on a die of the DDR memory.
A disclosed method facilitates AI-generation of a customized email per a methodology that significantly reduces the risk of the customized email including hallucinated facts or undesirable personal identity information (PII). The method includes identifying an email template and a recipient identifier that identifies a recipient of the customized email based on user inputs to an email application; mining contextual data stored in association with the recipient identifier; generating a large language model (LLM) prompt based on the email template and the contextual data; providing the LLM prompt as input to a trained large language model (LLM); receiving the customized email as an output from the LLM; and returning the customized email to the email application for display within a user interface.
The description relates to hinged devices. One example can include a spine defining inner and outer arced surfaces and a hinge arm positioned between the inner and outer arced surfaces and configured to arcuately move from a retracted position to a fully extended position. The example includes an anti-lock link captured between the spine and the hinge arm to move along an arc. Rotation of the hinge arm to the fully extended position is configured to rotate the anti-lock link along the arc and partially out of the spine until the anti-lock link contacts the spine and blocks further extension of the hinge arm. Reverse rotation of the hinge arm from the fully extended position toward the retracted position is configured to initially cause the hinge arm to contact the anti-lock link and not the outer arced surface.
A method and system for providing access to virtual desktops may include receiving an input indicating hovering of a pointer over an icon in a toolbar, identifying one or more existing virtual desktops, determining a state for each of the one or more existing virtual desktops by identifying one or more instances of any applications that are currently running in each of the one or more existing virtual desktops and determining a running state for each of the one or more instances, and displaying a preview of each of the one or more existing virtual desktops in response to the hovering of the pointer over the icon. The preview may include displaying the running state for one of the one or more instances for each existing virtual desktop.
Systems, methods, and software are disclosed herein for an iterative process of visualization generation. In an implementation, a computing device receives a user request to generate a visualization. The computing device submits a first prompt to a foundation model to obtain code for generating an instance of the visualization requested by the user. The computing device generates an instance of the visualization using the code. The computing device submits the instance of the visualization to an image model to obtain a description of the instance and submits a second prompt to the foundation model to obtain an evaluation of the instance of the visualization with respect to the visualization requested by the user, including the description produced by the image model.
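The iterative loop can be summarized with the following Python sketch; all four helper functions are stubs with hypothetical names, since the disclosure does not name concrete model APIs:

def foundation_model_codegen(request: str, feedback: str = "") -> str:
    return "import matplotlib.pyplot as plt\nplt.plot([1, 2, 3])"  # stub

def render(code: str) -> bytes:
    return code.encode()  # stand-in for executing the code into an image

def image_model_describe(image: bytes) -> str:
    return "a line chart of three points"  # stub

def foundation_model_evaluate(request: str, description: str) -> tuple[bool, str]:
    return True, ""  # stub: (acceptable?, feedback for the next round)

def generate_visualization(request: str, max_rounds: int = 3) -> bytes:
    feedback = ""
    for _ in range(max_rounds):
        code = foundation_model_codegen(request, feedback)       # first prompt
        image = render(code)                                     # instance
        description = image_model_describe(image)                # image model
        ok, feedback = foundation_model_evaluate(request, description)  # second prompt
        if ok:
            return image
    return image

generate_visualization("plot monthly sales as a line chart")

The image model's description gives the foundation model a textual view of what was actually rendered, closing the loop between the requested and generated visualizations.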
Systems, methods, apparatuses, and computer program products are disclosed for employing a hybrid boot to reimage a target device using a mobile device. A mobile device provides, to a target device, a boot file configured to execute an intermediate operating system. The mobile device performs a user presence check to determine whether the target device is in proximity to the mobile device. Responsive to determining that the target device is in proximity to the mobile device, the mobile device provides, to the intermediate operating system on the target device, transfer information associated with at least a first restricted-access portion of a customized system image to cause the intermediate operating system to obtain the first restricted-access portion of the customized system image and reimage the target device based at least on the first restricted-access portion of the customized system image.
The present disclosure relates to a vector processor implemented on programmable hardware (e.g., a field programmable gate array (FPGA) device). The vector processor includes a plurality of vector processor lanes, where each vector processor lane includes a vector register file with a plurality of register file banks and a plurality of execution units. Implementations described herein include features for optimizing resource availability on programmable hardware units and enabling superscalar execution when coupled with temporal single-instruction multiple data (SIMD) execution.
Systems and methods for spatial-textual clustering-based recognition of text in videos are disclosed. A method includes performing textual clustering on a first subset of a set of predictions that correspond to numeric characters only and performing spatial-textual clustering on a second subset of the set of predictions that correspond to alphabetical characters only. The method includes, for each cluster of predictions associated with the first subset of the set of predictions, choosing a first cluster representative to correct any errors in each cluster of predictions associated with the first subset of the set of predictions and outputting any recognized numeric characters. The method includes, for each cluster of predictions associated with the second subset of the set of predictions, choosing a second cluster representative to correct any errors in each cluster of predictions associated with the second subset of the set of predictions and outputting any recognized alphabetical characters.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually; using metadata automatically derived from the content
G06V 20/40 - Scenes; Scene-specific elements in video content
Systems and methods for providing status indicators for various forms of user activity that occurs across different digital contexts of a collaboration platform. A system can monitor activity that a particular user is performing within a particular digital context and provide status indicators to a different user within a different digital context when the monitored activity meets one or more criteria. For example, a system may cause a status indicator to be displayed in association with a data object within the digital context of a message thread when a specific type of user activity is occurring with respect to that data object within the digital context of an application that facilitates editing of the content of the data object. Thus, a system can deliver timely and contextually relevant status indicators about how team members are currently interacting with a data object without users having to switch between digital contexts.
Disclosed solutions perform image compression using a variational autoencoder that enables greater compression than traditional methods, while simultaneously maintaining superior fidelity for the decompressed image. Examples persist the bottleneck layer output of a variational autoencoder as a compressed image in the form of a latent tensor. The latent tensor is decompressed by a variational autodecoder into a recovered image in pixel space. In some examples, different encoder/decoder pairs are trained on specific image types, based on feature attributes. For example, maps have lines that are narrow compared to their length (e.g., have a high aspect ratio), which differ from features within photographs of people and scenes. Some examples leverage contrastive language-image pre-training (CLIP) and/or bootstrapping language-image pre-training (BLIP) models to store embeddings, each associated with a compressed image, to enable natural language searches of compressed image collections without requiring decompression.
A device includes a processor, and a memory storing executable instructions which, when executed by the processor, cause the processor alone or in combination with other processors to perform the following functions: receive textual user input from a user describing a design to be generated; implement a first prompt generator to generate a first prompt for a Large Language Model (LLM) to restructure the user input; and implement a second prompt generator to generate a second prompt for a text-to-image model using output of the LLM, the second prompt prompting the text-to-image model to produce a proposed design based on the user input. The proposed design is provided to the user via an application comprising controls for further editing the proposed design.
A computing system including a processor configured to receive an indication of one or more dead data qubits and one or more dead auxiliary qubits among qubits included in a quantum computing device. The qubits are arranged in a lattice that includes plaquettes. Each of the plaquettes includes data qubits and auxiliary qubits. The processor is further configured to compute a reduced lattice by, for each of the plaquettes that includes at least one dead data qubit, computing a respective first reduced plaquette that omits the dead data qubit. For each of the plaquettes that includes at least one dead auxiliary qubit, the processor is further configured to compute the reduced lattice at least in part by computing a respective second reduced plaquette that omits the dead auxiliary qubit. The processor is further configured to output instructions to implement an error correction code on the reduced lattice.
A method for enacting a measurement circuit of a surface code on a plaquette of qubits of a qubit lattice comprises: (a) distributing among a sequence of time steps a set of one-qubit projective measurements on each of three auxiliary qubits of the plaquette; (b) distributing among the sequence of time steps a set of two-qubit projective measurements on each of four data qubits of the plaquette together with one of the three auxiliary qubits; (c) distributing among the sequence of time steps a set of two-qubit projective measurements on two or more auxiliary-qubit pairs selected from the three auxiliary qubits of the plaquette; and (d) advancing through each of the time steps of the sequence, executing the one- and two-qubit projective measurements distributed therein. In this method the measurement circuit corresponds to a stabilizer of the surface code, and the measurements generate measurement of a stabilizer operator.
A method for implementing a measurement circuit of a surface code on a plaquette of qubits of a Majorana-tetron lattice comprises: (a) distributing among a sequence of time steps a set of one-qubit projective-measurement loops on each of three auxiliary qubits of the plaquette; (b) distributing among the sequence of time steps a set of two-qubit projective-measurement loops on each of four data qubits of the plaquette together with one of the three auxiliary qubits; (c) distributing among the sequence of time steps a set of two-qubit projective-measurement loops on two or more auxiliary-qubit pairs selected from the three auxiliary qubits of the plaquette; and (d) advancing through each of the time steps of the sequence, executing the one- and two-qubit projective measurements distributed therein. In this method the measurement circuit corresponds to a stabilizer of the surface code, and the measurements generate measurement of a stabilizer operator.
A computing system is provided, including processing circuitry configured to cause an interaction interface for a trained generative model to be presented, in which the interaction interface is configured to communicate a portion of a user interaction history. The processing circuitry is further configured to receive, via the interaction interface, an input for the trained generative model to generate an output. The processing circuitry is further configured to send a command to create, via the trained generative model or another trained generative model, a whiteboard based on the user interaction history and receive the created whiteboard. The processing circuitry is further configured to generate a prompt based on the whiteboard and the input from the user and provide the prompt to the trained generative model. The processing circuitry is further configured to receive a response from the trained generative model and output the response via the interaction interface.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Systems and methods for describing a composition of an article of manufacture are disclosed. In one aspect, a method includes receiving article composition data for an article of manufacture that identifies a set of parts of the article, a stated composition for each part of the set of parts, and a physical quantity of the stated composition. The method further includes classifying the stated composition of each part of the set of parts into a normalized composition that includes a set of normalized chemicals. The method further includes outputting an aggregated physical quantity of each normalized chemical for the set of parts of the article. The method can include classifying a normalized composition of each part into a material category within a hierarchical taxonomy based on the set of normalized chemicals of that normalized composition and outputting an aggregated physical quantity of each material category for the parts.
The present application relates to messaging between instances of a microservice in a decentralized architecture. A computer device hosting an instance may include a memory storing instructions to operate a microservice and a processor. The instance receives a message for the microservice from another instance of the microservice, the message including a branch of a hash tree with at least a block for a root hash of a central node, one or more blocks for intermediate nodes, and a leaf block including message content. The instance places at least the leaf block into a local hash tree based on the branch of the hash tree. The instance verifies an integrity and an order of the message based on the root hash and the location of the leaf block in the local hash tree. The instance acts on the message content in response to verifying the integrity and the order.
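A simplified Merkle-branch check in the spirit of this verification, using Python's hashlib; the sketch verifies integrity by recomputing the root from the leaf and its sibling hashes, and leaves out the order bookkeeping that the full scheme derives from the leaf block's position in the local hash tree:

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(leaf: bytes, siblings: list[tuple[bytes, str]], root: bytes) -> bool:
    # Recompute the root from the leaf (message content) and the sibling
    # hashes along the branch, then compare with the trusted root hash.
    node = h(leaf)
    for sibling, side in siblings:  # side: which side the sibling is on
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

msg = b"apply-discount:order-42"
other = h(b"other-message")
root = h(other + h(msg))
print(verify_branch(msg, [(other, "left")], root))  # True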
A system is provided comprising a display, a first processor, a second processor, an image sensor, and an ambient light sensor. On condition that the image sensor is not in use by an application, image sensor data is blocked from the first processor and routed to the second processor, to thereby enable the second processor to execute a color adjustment algorithm configured to use at least the image sensor data and ambient light data to adjust one or more color parameters of content displayed on the display, and to execute a brightness adjustment algorithm configured to use at least the image sensor data and the ambient light data to adjust a luminance of the display.
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
H04N 9/78 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase; for separating the brightness signal or the chrominance signal from the colour television signal, e.g. using comb filter
32.
METHOD AND SYSTEM FOR RESOURCE GOVERNANCE IN A MULTI-TENANT SYSTEM
Example aspects include techniques for implementing resource governance in a multi-tenant environment. These techniques may include receiving a service request for a multi-tenant service from a client device, and predicting a resource utilization value (RUV) resulting from execution of the service request based on text of the service request, an amount of data associated with the client device at the multi-tenant service, and/or a temporal execution value. In addition, the techniques may include determining that the RUV is greater than a preconfigured threshold identifying an expensive request, and applying a load balancing strategy to the service request based on the RUV being greater than the preconfigured threshold.
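A toy version of the threshold check follows; predict_ruv stands in for the prediction model, and its weights and the threshold are invented:

def predict_ruv(request_text: str, tenant_data_gb: float, temporal_factor: float) -> float:
    # Stand-in for the RUV prediction model described above.
    return 0.01 * len(request_text) + 0.5 * tenant_data_gb + temporal_factor

EXPENSIVE_THRESHOLD = 10.0  # hypothetical preconfigured threshold

def route(request_text: str, tenant_data_gb: float, temporal_factor: float) -> str:
    ruv = predict_ruv(request_text, tenant_data_gb, temporal_factor)
    # Expensive requests are routed to an isolated pool so they cannot
    # starve cheap requests from other tenants.
    return "isolated-pool" if ruv > EXPENSIVE_THRESHOLD else "shared-pool"

print(route("SELECT * FROM events WHERE ...", tenant_data_gb=22.0, temporal_factor=1.2))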
H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
High availability network services are provided in a communications network comprising a plurality of network devices including a network function implemented as two instances configured as an active instance and a backup instance. The backup instance maintains state data such that the backup instance can actively provide services in response to a failure of the active instance. A pool of data forwarding functions sends, over a tunnel connection, ingress data packets to the network function based on a MAC address of the active instance on an overlay network. When the active instance has failed, the backup instance provides the network function and the pool of data forwarding functions sends, over the tunnel connection, subsequent ingress data packets to the network function based on an overlay network MAC address of the backup instance.
An application programming interface (API) proxy intercepts API calls and responses for an application under test in a development environment, simulating (e.g., mocking) rate limiting and throttling behavior, which is otherwise challenging to test. The API proxy receives an API call and, based on a resource limiting parameter (e.g., rate-limiting or otherwise throttling), determines that the API call should be forwarded to the API endpoint. When the API proxy receives another API call from the application, destined for the same API endpoint, the API proxy determines not to forward the second API call, based on the resource limiting parameter (e.g., the call arrives too soon after the first API call, or requests too much of a computational burden, such as exceeding a resource quota). The API proxy instead returns a throttling response, as would be expected from the API endpoint. The API proxy provides guidance messages for both outgoing calls and incoming responses.
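A minimal sketch of such a proxy, assuming a simple minimum-interval rule as the resource limiting parameter (the real proxy would support richer policies such as quotas):

import time

class MockRateLimitProxy:
    def __init__(self, min_interval_s: float = 2.0):
        self.min_interval_s = min_interval_s
        self.last_call: dict[str, float] = {}

    def handle(self, endpoint: str) -> dict:
        now = time.monotonic()
        last = self.last_call.get(endpoint)
        if last is not None and now - last < self.min_interval_s:
            # Return the throttling response the real endpoint would send,
            # plus a guidance message for the developer.
            return {"status": 429, "retry_after": self.min_interval_s - (now - last),
                    "guidance": "Back off and honor Retry-After before retrying."}
        self.last_call[endpoint] = now
        return {"status": 200, "forwarded": True}

proxy = MockRateLimitProxy()
print(proxy.handle("/v1/items"))  # forwarded
print(proxy.handle("/v1/items"))  # throttled with guidance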
A processor includes a first die, a second die connected to the first die with a microfluidic volume positioned between the first die and the second die, a wicking heat spreader positioned in the microfluidic volume; and a boiling enhancement surface feature positioned on at least one surface of the wicking heat spreader.
Methods and systems are provided for improved access to rows of data in a distributed data system. Each data row is associated with a partition. Data rows are distributed in one or more files and an impure file includes data rows associated with multiple partitions. A clustering set is generated from a plurality of impure files by selecting a candidate impure file based on file access activity metrics and one or more neighbor impure files. Data rows of the impure files included in the clustering set are sorted according to their respective associated partitions. A set of disjoint partition range files are generated based on the sorted data rows of the impure files included in the clustering set. Each file of the set of disjoint partition range files is transferred to a respective target partition.
G06F 16/13 - File access structures, e.g. distributed indices
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
37.
FEDERATED GRAPH QUERIES ACROSS HETEROGENEOUS DATA STORES
Solutions are disclosed that enable efficient federated graph queries across multiple isolated data stores. Examples leverage the connectedness of the expected data that spans the data stores by defining the entities and relationships and inferring the intent of the queries. These are used to optimize data searches in the individual data stores. Examples map each of two or more variables of the input query to elements of a public schema and use the mapping to determine a storage tag (identifying a data store) for each of the variables of the input query. Store-specific queries are scheduled and performed based on at least the storage tags.
A personalized natural language processing (NLP) system tokenizes a plurality of sets of raw text data to generate a plurality of sets of tokenized text data for the plurality of users, respectively. The tokenized text data includes a sequence of tokens corresponding to the raw text data, the tokens at least identifying distinct words or portions of words in the raw text. The system appends predetermined user-specific tokens to the sets of tokenized text data from the users, respectively. Each predetermined user-specific token corresponds to one of the users. The system processes the sets of tokenized text data using the NLP model in accordance with the appended predetermined user-specific tokens to predict a personalized classification for the sets of tokenized text data from each of the users, and outputs the personalized classifications of the tokenized text data for each of the users.
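A toy illustration of the token-appending step, assuming a whitespace tokenizer and reserved per-user token names (the system's actual tokenizer and model are not specified here):

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def personalize(tokens: list[str], user_id: str) -> list[str]:
    # One reserved token per user lets a single shared NLP model condition
    # its prediction on who wrote the text.
    return tokens + [f"<user_{user_id}>"]

batch = {
    "alice": "ship the report tomorrow",
    "bob": "ship the report tomorrow",
}
for user, text in batch.items():
    print(personalize(tokenize(text), user))
# Same words, different user tokens -> the model can classify differently per user.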
A data processing system includes a processor, and a memory storing executable instructions which, when executed by the processor, cause the processor alone or in combination with other processors to perform the following functions: based on a list of design purposes, generate prompts requesting a Large Language Model (LLM) to produce corresponding prompts for input to a text-to-image model to generate a proposed design corresponding to each design purpose; submit the prompts from the LLM to the text-to-image model; receive the proposed designs from the text-to-image model; and expand a design template library by adding a design based on the proposed designs output by the text-to-image model.
Control of network traffic in a network is provided, including classifying a network request from a network source address using request classifiers selected from a plurality of request classifiers based on the network request satisfying classification conditions of the selected request classifiers, associating the network request with each classifier metric corresponding to the selected request classifiers, aggregating the classifier metrics associated with the network request to determine an aggregate request control metric of the network request, and instructing a network traffic controller to operate on the network request based on whether the aggregate request control metric satisfies a request control condition. Each of the plurality of request classifiers is associated in memory with a corresponding classifier metric.
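The classification-and-aggregation flow might be sketched as follows, with invented classifiers, metric values, and threshold:

classifiers = [
    {"name": "known-scanner-range", "condition": lambda r: r["src"].startswith("203.0.113."), "metric": 0.6},
    {"name": "burst-rate",          "condition": lambda r: r["rps"] > 100,                    "metric": 0.3},
    {"name": "no-user-agent",       "condition": lambda r: not r.get("user_agent"),           "metric": 0.2},
]

def aggregate_metric(request: dict) -> float:
    # Sum the metric of every classifier whose condition the request satisfies.
    return sum(c["metric"] for c in classifiers if c["condition"](request))

BLOCK_THRESHOLD = 0.7  # hypothetical request control condition
req = {"src": "203.0.113.9", "rps": 150, "user_agent": "curl/8.0"}
score = aggregate_metric(req)
print(score, "block" if score >= BLOCK_THRESHOLD else "allow")  # 0.9 block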
Enforcement of a communication policy at a communication intermediary configured to communicate between a first communicating entity and a second communicating entity is provided. The communication intermediary includes packet routers. The enforcement includes identifying, by the packet routers of the communication intermediary, a secure plaintext label in each network packet of labeled network traffic received at the packet routers, evaluating whether the labeled network traffic satisfies an enforcement condition of the communication policy based on the secure plaintext label, and instructing a network controller to operate on the labeled network traffic according to the communication policy, based on the operation of evaluating. Each network packet includes encrypted content configured to be inaccessible by the packet routers. The secure plaintext label is accessible by the packet routers and includes a data encoding of a portion of the encrypted content.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 45/302 - Route determination based on requested QoS
A system classifies an intent based on a received prompt and identifies system-provided prompts based on the intent. The system inputs the system-provided prompts and the received prompt to a generative artificial intelligence model, wherein the generative artificial intelligence model outputs form items corresponding to the received prompt and the system-provided prompts, the form items including form prompt items and form response items. The system converts the form items into a renderable form presentable in a user interface, wherein the renderable form includes the form prompt items and the form response items.
Example solutions provide an artificial intelligence (AI) agent for pre-build configuration of cloud services in order to enable the initial build of a computational resource (e.g., in a cloud service) to minimize the likelihood of excessive throttling or slack. Examples leverage prior-existing utilization data and project metadata to identify similar use cases. The utilization data includes capacity information and resource consumption information (e.g., throttling and slack) for prior-existing computational resources, and the project metadata includes information for hierarchical categorization to identify similar resources. A pre-build configuration is generated for the customer's resource, which the customer may tune based upon the customer's preferences for a cost and performance balance point.
Undesirable light leakage is reduced in a mixed-reality head-mounted display (HMD) device using an out-coupling diffractive optical element in a waveguide combiner that is implemented using a surface relief grating (SRG) having a gradient refractive index. The SRG has gratings with modulated depth in which shallower gratings have a lower refractive index and deeper gratings have a higher refractive index. The lower efficiency of the shallower gratings reduces forward-propagating virtual image light leaking into the real-world environment of the HMD device while simultaneously enabling light to propagate to the deeper gratings to thereby improve virtual image uniformity over the entirety of the eyebox of the combiner. The SRG with gradient refractive index is alternatively fabricated using an inkjet deposition process with resin inks having different refractive indexes and subsequent nanoimprint lithography grating imprinting, or by physical vapor deposition by which a thickness-modulated resin layer is applied to a constant-height grating structure.
Implementations of semantic parsing using pre-trained language models are provided. One aspect includes a computing system for semantic parsing of natural language. The computing system comprises processing circuitry and memory containing instructions that, when executed, cause the processing circuitry to receive a request comprising a natural language utterance and generate a formal meaning representation using the natural language utterance and a language model comprising a semantic parser that has been prompted with training data generated by providing a dataset comprising a set of unlabeled programmatic scripts and a seed programmatic script, generating a set of parsed natural language descriptions by inputting the set of unlabeled programmatic scripts into an inverse semantic parser, generating a set of re-parsed programmatic scripts by inputting the set of parsed natural language descriptions into the semantic parser, and determining a set of labeled programmatic scripts by validating the set of re-parsed programmatic scripts.
The disclosure relates to utilizing a domain insight system for providing plain language descriptions and insights into complex data and/or sparsely populated domains using machine-learning models and large generative models. For instance, the domain insight system converts data outputs from machine-learning models in various output formats into clear, accurate, comprehensible, and straightforward results. The domain insight system achieves this by using one or more dynamic prompts that are tailored based on the data output types and report descriptors, thus improving the accuracy and efficiency of the large generative model. In particular, the domain insight system uses specialized prompts with carefully selected parameters and, in some cases, system-level meta-prompts, to generate accurate domain-based reports and explanations for a given dataset.
A system for establishing network reliability for a computer network includes a plurality of initiating nodes to transmit a plurality of packets across the network and a plurality of receiving nodes to receive the plurality of packets via the network. A portion of the plurality of packets transmitted from the initiating nodes are appended with identifiers that correspond to characteristics of entities using the network. The plurality of receiving nodes transmit acknowledgement receipts associated with packets appended with the identifiers to a network monitoring system that monitors quality of service associated with the characteristics.
Some embodiments engineer a prompt for submission to a language model, such as a software development large language model. Some embodiments ascertain a relationship between code development information and potential context. Code development information includes static analysis results, project settings, development tool history or status data, and other software development data which augments training data previously embedded in the language model. Some embodiments compute a prompt inclusion score of the potential context, based on at least the relationship, and use the inclusion score to determine whether to include the potential context in the language model prompt. In some scenarios, an embodiment determines where to place the context in the prompt. Scoring is performed by a formula, statistical scoring model, or machine learning scoring model. Some embodiments reduce context inclusion false positives and false negatives that were based on the use of embedding similarity scores alone.
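A hand-written stand-in for the inclusion scoring follows; the signals and weights are assumptions, and the disclosure's statistical or machine learning scoring model would replace the fixed formula:

def inclusion_score(context: dict) -> float:
    score = 0.0
    score += 0.5 * context.get("embedding_similarity", 0.0)
    # Augment similarity with development-tool relationships, which is the
    # point of the technique: similarity alone over- and under-selects.
    if context.get("referenced_by_open_file"):
        score += 0.3
    if context.get("recently_edited"):
        score += 0.2
    if context.get("flagged_by_static_analysis"):
        score += 0.2
    return score

candidate = {"embedding_similarity": 0.4, "recently_edited": True,
             "flagged_by_static_analysis": True}
INCLUDE_THRESHOLD = 0.5  # hypothetical
print(inclusion_score(candidate) >= INCLUDE_THRESHOLD)  # True -> include in prompt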
A technique is described herein for receiving a selected set of weights and a mask produced by any type of sparsification process by operating on an original set of weights. The mask describes positions of the selected set of weights and a non-selected set of weights among a combined set of weights. For example, the non-selected set of weights represent weights that have been zeroed out in the original set of weights. In an inference stage, a processor directly performs computations on the selected set of weights and the mask, without the preliminary step of reconstituting the non-selected weights in memory. Instead, the processor performs computations that take into account the influence of the non-selected weights. The technique is efficient because it reduces the consumption of memory during the execution of the machine-trained model, and reduces the transactional costs associated with moving weights between memory and processing functionality.
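In plain Python (frameworks omitted), the idea reduces to consuming the packed selected weights and the mask together during the computation, without first rebuilding the dense weight vector, as in this sketch:

selected = [0.7, -1.2, 0.4]          # surviving weights, densely packed
mask = [1, 0, 0, 1, 0, 1]            # 1 = position of a selected weight
x = [2.0, 3.0, 5.0, 1.0, 4.0, 0.5]   # activation vector

def masked_dot(selected, mask, x):
    acc, k = 0.0, 0
    for i, m in enumerate(mask):
        if m:                          # only selected positions contribute;
            acc += selected[k] * x[i]  # zeroed-out weights are skipped entirely
            k += 1
    return acc

dense = [0.7, 0.0, 0.0, -1.2, 0.0, 0.4]  # what reconstitution would build in memory
assert masked_dot(selected, mask, x) == sum(w * v for w, v in zip(dense, x))
print(masked_dot(selected, mask, x))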
A computing system for preventing at least a portion of an input device from attaching to an improper location on a computing device is disclosed. In one example, the input device comprises first and second input device magnets spaced by a separation distance and having a first magnetic pole orientation. The computing device comprises a housing with a first side and a first end adjacent to the first side, and first and second computing device magnets spaced by the separation distance and having a second magnetic pole orientation opposite to the first magnetic pole orientation. At least one repelling magnet having the first magnetic pole orientation is located between the first end of the housing and the second computing device magnet to repel the second input device magnet of the input device.
G06F 1/16 - Constructional details or arrangements
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
51.
SYSTEM AND METHOD FOR PERFORMING QUERY OPERATIONS ON RUN LENGTH ENCODED DATA
A method, computer program product, and computing system for processing query operations on run length encoding (RLE) data in a parallel processing computing system. Data for query execution is received at a parallel processing computing system, at least a portion of the data being compressed according to RLE, thereby forming RLE data; and a query operation is executed on the RLE data without performing a decompression operation on the RLE data.
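As a small illustration of operating on RLE data without decompression, each (value, run_length) pair can be processed once rather than once per logical row:

rle_column = [(5, 1000), (7, 250), (5, 4000)]  # value repeated run_length times

def sum_rle(runs):
    # SUM over the logical column without decompressing it.
    return sum(value * length for value, length in runs)

def count_where_rle(runs, predicate):
    # COUNT(*) WHERE predicate(value): one predicate call per run, not per row.
    return sum(length for value, length in runs if predicate(value))

print(sum_rle(rle_column))                            # 26750
print(count_where_rle(rle_column, lambda v: v == 5))  # 5000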
The techniques disclosed herein provide a synchronization engine that operates in conjunction with a service worker to dynamically store and update a working set of user data and single page application (SPA) resources from a network server to a user device. The working set can be hosted across several domains and identified by association with a user account. Accordingly, the synchronization engine retrieves the working set from the network server to enable offline execution of the single page applications. As such, subsequent requests for interacting with a single page application are then serviced by the synchronization engine using the working set retrieved from the network server. For instance, the service worker can bind user data to the application resources to enable progressive rendering through an application controller using locally available resources. In this way, the disclosed system provides a consistent user experience irrespective of network connectivity.
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
A method, computer program product, and computing system for optimizing query operations on run length encoding (RLE) data in a parallel processing computing system. Data is received in a plurality of columns of an input table of a parallel processing computing system for query execution; the system determines that at least a portion of the received data in a first number of columns is compressed according to run length encoding (RLE), thereby comprising RLE data columns including RLE data, and that the received data in a second number of columns is not compressed according to RLE, thereby comprising non-RLE data columns including non-RLE data. A query operation is executed on the RLE data and the non-RLE data by prioritizing processing of the RLE data columns over processing of the non-RLE data columns.
A supply chain tracking system utilizes tracking codes to track products through a supply chain. A tracking code is assigned to each product. If the product is grouped with other products at a stage in the supply chain, a tracking code is assigned to the group, and the tracking code for each of the products in the group is associated with the tracking code for the group. If the group of products is further aggregated with groups of other products, such as in a shipping container, a tracking code is assigned to the aggregated groups of products, and the tracking code for each of the groups of products is associated with the tracking code for the aggregated groups of products. The tracking codes are used to generate a supply chain graph which maps the travel of each product through the supply chain.
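A minimal sketch of the tracking-code associations as a graph, with invented code names for products, cases, and a container:

edges: dict[str, list[str]] = {}  # parent tracking code -> child tracking codes

def assign_group(group_code: str, member_codes: list[str]) -> None:
    edges.setdefault(group_code, []).extend(member_codes)

# Products grouped into cases, cases aggregated into a shipping container.
assign_group("case-A", ["prod-1", "prod-2"])
assign_group("case-B", ["prod-3"])
assign_group("container-X", ["case-A", "case-B"])

def products_in(code: str) -> list[str]:
    # Walk the supply chain graph down to individual products.
    children = edges.get(code)
    if children is None:
        return [code]  # a leaf is an individual product
    return [p for child in children for p in products_in(child)]

print(products_in("container-X"))  # ['prod-1', 'prod-2', 'prod-3']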
Methods, systems, and computer storage media for providing speech synthesis using a code-mixed speech engine in a speech synthesis system. A code-mixed speech engine supports generating natural and intelligible speech in a target speaker voice—for code-mixed text of two or more languages—based on a code-mixed speech model that supports both code-mixing and cross-locale voice transfer scenarios. In operation, code-mixed training data associated with a plurality of different languages is accessed. A code-mixed speech model—associated with a training engine and an inference engine that support generating code-mixed synthesized speech—is generated. The code-mixed speech model is deployed. A request is received for synthesized speech from a speech synthesis service. An instance of code-mixed synthesized speech is generated using the code-mixed speech model. The instance of code-mixed synthesized speech is communicated for output on an interface associated with the speech synthesis service.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 13/10 - Prosody rules derived from text; Stress or intonation
56.
AI-MODIFIED CODE RECOMMENDATION IN CONTEXT OF A DEVELOPER TOOL
Techniques are described herein that are capable of providing a recommendation of AI-modified code in context of a developer tool. Based at least on code being developed in a developer tool, an interface element is provided in a user interface of the developer tool. The interface element is configured to receive a prompt that specifies a modification to be performed on the code. Based at least on receipt of the prompt, an AI model is automatically caused to perform the modification on at least a snippet of the code to provide a modified snippet. The modified snippet is processed using a language intelligence tool of the developer tool to provide a processed version of the modified snippet. A recommendation to replace the snippet in the code with the modified snippet is provided by causing the processed version of the modified snippet to be displayed via the user interface.
Methods, systems, and computer storage media for providing compute management using a compute management engine in an artificial intelligence (AI) system. A compute management engine supports dynamically switching between two modes of operation for an inference phase of a generative AI model. The compute management engine employs a bypass engine that causes prompt stage operations to be executed without an in-memory compute engine and causes auto-regression stage operations to be executed with the in-memory compute engine. In operation, an inference phase operation is accessed. When the inference phase operation is a prompt stage operation, the inference phase operation is executed without an in-memory compute engine. When the inference phase operation is an auto-regressive stage operation, the inference phase operation is executed with the in-memory compute engine. Memory output is generated for the inference phase operation to cause a processor to output a processor output for the inference phase operation.
A computing device is provided, including a processor and a storage device holding instructions that are executable by the processor to implement a base artificial intelligence (AI) model and two or more delta AI models, each delta AI model having lower dimensionality than the base AI model. An inference request including an input prompt is received, the inference request specifying a selected delta AI model of the two or more delta AI models. The input prompt is input to the base AI model to thereby generate a base model result vector. The input prompt is input to the selected delta AI model to thereby generate a delta model result vector. An output vector is generated by combining the base model result vector and the delta model result vector via a combination operation. The output vector is output.
A file system volume's space is internally allocated as a set of containers, with each container corresponding to a different subset of storage clusters. The set of containers includes a first container corresponding to a first subset of storage clusters storing first data that is compressed according to a first set of compression attributes, a second container corresponding to a second subset of storage clusters storing second data that is uncompressed, and a third container corresponding to a third subset of storage clusters that are free. Based on determining that the second data of the second container is to be compressed, compressed second data is created based on a second set of compression attributes. The second set of compression attributes is different than the first set of compression attributes. The compressed second data is written to the third subset of storage clusters of the third container.
Detecting an anomalous configurator of a plurality of managed entities is described. A plurality of configuration trees, each representing configuration parameters at a corresponding managed entity, include annotations that identify a configurator that configured each parameter. For a particular configurator, a plurality of subtrees is generated from the plurality of configuration trees. A set of weighted edit distances are calculated from the plurality of subtrees, each representing a degree of difference between a different pair of subtrees. A distance matrix is populated with the set of weighted edit distances, and the distance matrix is used to identify anomalous subtree(s) within the plurality of subtrees for the particular configurator. In embodiments, a configuration corresponding to an anomalous subtree is considered to have been anomalously applied by the particular configurator. Data that identifies at least one managed entity associated with the anomalous subtree and the particular configurator is stored.
A media server uses selective just-in-time (“JIT”) transcoding of media such as video. For example, the media server determines a measure of complexity of a given segment of a given media sequence. The given segment has been encoded at a base bit rate. The media server evaluates a complexity condition for the given segment. As part of evaluating the complexity condition, the media server compares the measure of complexity to a complexity threshold. Based at least in part on whether the complexity condition is satisfied, the media server selects between use of preemptive transcoding and use of JIT transcoding for the given segment at a given target bit rate. In this way, the media server can selectively incur the cost of preemptive transcoding operations for the given segment if JIT transcoding would likely introduce unacceptable delay, and the media server can otherwise use JIT transcoding operations for the given segment.
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
62.
ULTRA DENSE PROCESSORS WITH EMBEDDED MICROFLUIDIC COOLING
A processing unit includes a first die and a second die with a microfluidic volume between the first die and the second die. At least one heat transfer structure couples the first die to the second die and is located in the microfluidic volume. An electrochemical fluid is positioned in the microfluidic volume to provide electrochemical energy to at least one of the first die and the second die and receive heat from the first die and the second die.
Techniques for enabling a library of local maps to remain de-coupled from a global map are disclosed. An MR system is determined to be located on a platform that is currently moving or that has an ability to readily move. That platform's type is determined. Based on the determined type for the platform, a 3D boundary that approximates a shape for the platform's type is generated. The 3D boundary is imposed on the platform. Scanning data for the platform is acquired. The bounds of that scanning data are at least initially limited to those of the 3D boundary. The scanning data is used to build or supplement a library of local maps. The library is representative of the platform. That library is prevented from being coupled to a global map.
The present disclosure relates to highlighting audience members who react to a presenter during an online meeting. Unlike a physical, face-to-face meeting, which enables spontaneous interactions among the presenter and audience members collocated with the presenter, presenting materials during an online meeting leaves the presenter unable to see real-time reactions or feedback from the audience members. The present disclosure addresses this issue by dynamically determining one or more audience members who indicate reactions during the online meeting or presentation and displaying the faces of those audience members under a spotlight to the presenter. The presenter sees the faces of the reacting audience members during the online presentation, responds to them, and keeps the audience engaged. A spotlight audience server analyzes video frames and determines the types of reactions of the audience members.
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
65.
CACHE SERVICE FOR PROVIDING ACCESS TO SECRETS IN CONTAINERIZED CLOUD-COMPUTING ENVIRONMENT
A cache service provides applications in a containerized, multi-tenant cloud-computing system low-latency access to secrets. The cache service may operate as a cluster-level service or a sidecar service. The cache service may store copies of secrets (which are located in one or more absolute stores) in a cache storage. The cache service and the cache storage may be closer to the applications than the one or more absolute stores are to the applications. The cache service may aggregate secrets associated with multiple entities in a single cache storage. The cache service may support isolation between secrets such that secrets of a first entity are isolated from secrets of a second entity. The cache service may enforce granular access controls such that it can apply different access controls to secrets of a first entity than to secrets of a second entity.
G06F 12/128 - Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
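A minimal sketch of such a cache follows, assuming invented names (SecretsCache, put, get) and reducing isolation to per-entity namespaces and granular access control to per-entity ACLs.

```python
import time

class SecretsCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self._ttl = ttl_seconds
        # Keyed by entity so one tenant's secrets never share a namespace
        # with another's (isolation between secrets).
        self._store: dict[str, dict[str, tuple[str, float]]] = {}
        self._acl: dict[str, set[str]] = {}   # entity -> principals allowed to read

    def put(self, entity: str, name: str, value: str) -> None:
        self._store.setdefault(entity, {})[name] = (value, time.time())

    def get(self, entity: str, name: str, principal: str) -> str | None:
        # Granular access control: each entity's secrets carry their own ACL.
        if principal not in self._acl.get(entity, set()):
            raise PermissionError(f"{principal} may not read secrets of {entity}")
        value, stored_at = self._store.get(entity, {}).get(name, (None, 0.0))
        if value is None or time.time() - stored_at > self._ttl:
            return None   # miss or expired: caller falls back to the absolute store
        return value
```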
66.
CHARACTERIZING AND FORECASTING EVOLVING QUERY WORKLOADS
Systems and methods for characterizing and forecasting evolving query workloads. The method includes receiving a query, the received query including a parameter value and an arrival time; identifying the query as a recurrent query; extracting a query template from the received query by parsing the received query; based at least on the identifying, generating a feature vector for the received query, the feature vector generated based on the extracted template and the parameter value; and forecasting a future query based on the generated feature vector by applying a neural network.
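One way to realize the featurization step is sketched below. Hashing the template and parameter values into a fixed-length vector is an assumption made for illustration, and any sequence model could consume the resulting vectors for forecasting.

```python
import hashlib

def feature_vector(template: str, params: list[str], arrival_time: float) -> list[float]:
    """Encode a recurrent query as (template id, arrival time, hashed parameters)."""
    template_id = int(hashlib.sha1(template.encode()).hexdigest(), 16) % 10_000
    hashed_params = [int(hashlib.sha1(p.encode()).hexdigest(), 16) % 1_000 for p in params]
    return [float(template_id), arrival_time, *map(float, hashed_params)]
```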
This disclosure provides electrochemically-cleavable linkers with cleavage potentials that are less than the redox potential of the solvent in which the linkers are used. In some applications, the solvent may be water or an aqueous buffer solution. The linkers may be used to link a nucleotide to a bound group. The linkers include a cleavable group which may be one of a methoxybenzyl alcohol, an ester, a propargyl thioether, or a trichloroethyl ether. The linkers may be cleaved in solvent by generating an electrode potential that is less than the redox potential of the solvent. In some implementations, an electrode array may be used to generate localized electrode potentials which selectively cleave linkers bound to the activated electrode. Uses for the linkers include attachment of blocking groups to nucleotides in enzymatic oligonucleotide synthesis.
Solutions for evaluating source code generators use offline and online evaluation stages. Offline evaluation includes separating each of a plurality of input passages of software code into a plurality of constituent blocks. Each code generator (of a plurality of code generators) generates an equivalent block corresponding to each constituent block. A coding score is determined for each equivalent block (for each code generator), and the coding scores are aggregated across the equivalent blocks to provide an aggregate score for each code generator. A ranking of the aggregate scores is used to down-select to a fewer number of code generators for online evaluation. For this stage, the code generators output passages of software code, and user acceptance of the code generators' outputs may be used for further ranking and down-selection. Some examples weight the coding score according to a code utility estimate of the constituent blocks for which equivalent blocks are generated.
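The offline aggregation and down-selection might look like the sketch below, where the per-block coding scores are already computed and the utility estimate enters as a weight; the function name and data layout are illustrative only.

```python
def rank_generators(scores: dict[str, list[tuple[float, float]]], keep: int) -> list[str]:
    """scores: generator name -> [(coding_score, utility_weight), ...] per equivalent block."""
    aggregate = {
        gen: sum(s * w for s, w in blocks) / sum(w for _, w in blocks)
        for gen, blocks in scores.items()
    }
    # Rank by aggregate score and down-select for the online evaluation stage.
    return sorted(aggregate, key=aggregate.get, reverse=True)[:keep]
```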
The present disclosure relates to systems and methods that add an outer product engine and an accumulator array to implement Advanced Reduced Instruction Set Computer Machine (ARM)'s scalable matrix extensions (SME) instruction set in an ARM central processing unit (CPU) core. The systems and methods reuse the existing scalable vector extension (SVE) hardware already present in the ARM CPU core for executing the SME instruction set. The systems and methods of the present disclosure use temporal single-instruction multiple data (SIMD) processing, executing an instruction over multiple cycles, to reduce the memory bandwidth needed in the ARM CPU core to process the SME instruction set.
A method, computer program product, and computing system for processing target content generated by processing source content using a generative artificial intelligence (AI) model, where the generative AI model performs a task using the source content to generate the target content. An ontological concept is extracted from the source content using a natural language processing (NLP) engine. An ontological concept is extracted from the target content using the NLP engine. An ontological concept comparison score is generated by comparing the ontological concept from the source content and the ontological concept from the target content based upon, at least in part, the task performed using the source content to generate the target content. An issue is identified in the target content based upon, at least in part, the ontological concept comparison score and the task performed using the source content to generate the target content.
Systems, methods, devices, and computer readable storage media described herein provide techniques for simplifying data access and management for data computing. In an aspect, a request to load data is received. The request comprises an aliased name associated with the data. A call is transmitted to a name resolution service executing on a computing device. The call comprises the aliased name and is configured to cause the name resolution service to identify the data associated with the aliased name. A response is received from the name resolution service. The response comprises metadata of the data. The data is obtained from a data source based on the metadata. A dataset is generated based on the obtained data. A response to the request is provided. The response comprises the generated dataset. In a further aspect, an application is configured to import a library into a computer program under development.
A data processing system implements techniques for generating personalized content using a brand kit. The system receives a natural language prompt to generate content in a design application on the client device of a user and analyzes the prompt to determine whether the user intends to apply a brand kit to the generated content. The system automatically generates a brand kit for the user if one does not already exist and applies the brand kit to content generated using one or more generative models to create personalized content. The system includes a prompt generation unit that generates a plurality of model-specific prompts to the one or more generative models to cause the one or more generative models to create the personalized content.
Technology is disclosed herein for content assistance processes via foundation model integrations in software applications. In an implementation, a computing device receives natural language input from a user relating to content of a document in a user interface of an application. The computing device generates a first prompt for a foundation model to generate at least a completion to the natural language input. The computing device receives a reply to the first prompt from the foundation model which includes a completion to the natural language input. The computing device causes display of the completion in association with the natural language input in the user interface and receives user input comprising an indication to combine the input and the completion, resulting in a revised natural language input. The computing device submits a second prompt including the revised natural language input to the foundation model.
An image sensor comprises a plurality of image sensing pixels arranged to form a sensor array. Each image sensing pixel of the plurality of image sensing pixels comprises a semiconductor photodetector connected to a photosensitive region that comprises a photon reception area configured to receive photons to facilitate image capture. For at least a particular image sensing pixel of the plurality of image sensing pixels, the length or the width of the photon reception area is smaller than about 80% of a pixel pitch measurement between the particular image sensing pixel and an adjacent image sensing pixel, which contributes to reduced volume of the photosensitive region and mitigated sensor noise. A space between the photosensitive region of the particular image sensing pixel and the photosensitive region of the adjacent image sensing pixel comprises at least one oxide layer and/or at least one metal layer.
Systems and methods are provided for generating and updating a dependency graph that is used in combination with textual information about incidents to improve incident-linking suggestions. Systems and methods are also provided for generating, training, and using a machine learning model configured to perform incident linking using both graph data and text data. Beneficially, these systems and methods align the graph data and text data in order to more efficiently and accurately leverage information from the multi-modal data.
The description relates to automated binary code summarization. In one example, a binary code summarization tool receives binary code and combines the received binary code with natural language in a prompt for a large language model (LLM). The binary code summarization tool receives a semantic summarization from the LLM relating to the received binary code and evaluates the semantic summarization for malicious functionality in the received binary code.
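A hedged sketch of the prompt-assembly step follows; query_llm is a placeholder for whatever model client the tool uses, and the prompt wording is invented.

```python
def summarize_binary(disassembly: str, query_llm) -> str:
    """Combine binary code with natural language and return the LLM's summary."""
    prompt = (
        "Summarize what the following binary code does, and flag any behavior "
        "that could indicate malicious functionality (persistence, exfiltration, "
        "code injection):\n\n" + disassembly
    )
    return query_llm(prompt)   # semantic summarization, screened afterwards for malice
```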
A computer-implemented method for selective indexing of target content is disclosed. A web hosting system hosting the target content can collect user access data for the target content, which is presented in a first language; extract user locations from the user access data; detect, from the user locations, an area associated with a second language that is different from the first language; and evaluate a trigger condition based at least in part on comparing a content metric, which measures user access to the target content from the area, to a content threshold. Responsive to detecting satisfaction of the trigger condition, the system can translate the target content from the first language to the second language; and index the target content presented in the second language so as to enable the target content to be searched using the second language.
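Reading the content metric as the share of accesses originating from the detected area, the trigger check collapses to a simple ratio test, as in this sketch; the abstract leaves the exact metric open, so this is one assumed interpretation.

```python
def should_translate_and_index(accesses_from_area: int,
                               total_accesses: int,
                               content_threshold: float) -> bool:
    """Trigger translation/indexing when the area's share of accesses crosses the threshold."""
    if total_accesses == 0:
        return False
    return accesses_from_area / total_accesses >= content_threshold
```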
Asynchronously updating object detections within a video stream. A first set of objects associated with a first frame includes a first object detected by a first detection model. Object detection is initiated on a second frame by a second detection model. A second set of objects is identified as being associated with a third frame that is subsequent to the first frame in the video stream. The first object is included in the second set based on tracking the first object from the first frame to the third frame. A second object is identified within the second frame based on the second detection model. When the first object corresponds to the second object but has a different attribute, an attribute of the first object is updated. When the first object does not correspond to the second object, the second object is fast-tracked into the third frame.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
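The merge between tracked objects and fresher detections in the abstract above can be pictured as below; identity matching is reduced to an id comparison for illustration, whereas a real system would match detections by spatial overlap.

```python
def merge_detections(tracked: list, fresh: list) -> list:
    """Carry tracked objects into the next frame, folding in newer model output."""
    by_id = {obj.id: obj for obj in tracked}
    for new in fresh:
        old = by_id.get(new.id)
        if old is None:
            by_id[new.id] = new                  # fast-track a newly detected object
        elif old.attribute != new.attribute:
            old.attribute = new.attribute        # refresh a stale attribute
    return list(by_id.values())
```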
Disclosed are novel approaches to debugging a formula in a spreadsheet environment. An execution trace shows step-by-step how a formula is evaluated. Instead of overwhelming users by displaying a step for every atomic evaluation, multiple evaluations are displayed in the same step. This makes the execution trace compact yet intuitive, enabling users to quickly and efficiently understand how the formula is evaluated. Visualizing formula execution in this way also reduces the computing and energy costs of excess recalculations incurred by trial-and-error based debugging techniques.
Embodiments of the disclosed technologies include receiving a first query including at least one first query term and configuring at least one prompt to cause a large language model to translate the at least one first query term into a set of functions that can be executed to obtain at least one second query term and generate and output a plan that is executable to create a modified version of the first query based on the at least one second query term. The plan is obtained by applying the large language model to the at least one prompt as configured. The plan is executed to determine the at least one second query term and create the modified version of the first query. The modified version of the first query is executed to provide, via the user interface, a response to the first query.
Securing and optimizing communications for a cloud service provider includes collecting connection summary information at network interface devices associated with host computing devices for a group of resources allocated to a customer of the cloud computing environment. The connection summary information includes local address information, remote address information, and data information for each connection established via the network interface devices. At least one communication graph is generated for the group of resources using the connection summary information. The graph includes nodes that represent communication resources of the group of resources and edges extending between nodes that characterize communication between the nodes. At least one analytics process is performed on data from the graph to identify at least one of a micro-segmentation strategy, a communication pattern, and a flow prediction for the group of resources.
H04L 43/045 - Processing captured monitoring data, e.g. for logfile generation for graphical visualisation of monitoring data
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
H04L 43/55 - Testing of service level quality, e.g. simulating service usage
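A sketch of the graph-construction step for the abstract above, using networkx for convenience; the summary field names are assumptions about the connection records.

```python
import networkx as nx

def build_comm_graph(summaries: list[dict]) -> nx.DiGraph:
    """Nodes are resources; edge weights accumulate bytes exchanged between them."""
    g = nx.DiGraph()
    for s in summaries:
        src, dst = s["local_address"], s["remote_address"]
        if g.has_edge(src, dst):
            g[src][dst]["bytes"] += s["bytes"]
        else:
            g.add_edge(src, dst, bytes=s["bytes"])
    return g   # input to micro-segmentation, pattern, and flow-prediction analytics
```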
82.
AUDITABLE MECHANISM FOR INTERNAL SERVICES TO TRANSACT ON TENANT ENTITIES
Example aspects include techniques for providing an auditable mechanism for internal services to transact on tenant entities. These techniques may include receiving, from an internal service, by an assistant service, a service request to perform a cloud computing action over tenant data of a tenant of a cloud computing environment. In addition, the techniques may include identifying, by the assistant service, an existing principal of the assistant service within the tenant and possession of an existing permission associated with performing the cloud computing action within the tenant of the cloud computing environment. Further, the techniques may include performing the cloud computing action on behalf of the internal service based on identifying the existing principal and possession of the existing permission.
Techniques are described herein in which boot firmware validated by secure flash memory validates read-only portions of the stored firmware or a downloaded image of the read-only portions. The secure flash memory validates a portion of the firmware, which includes the boot firmware and a reference hash of the read-only portions, by comparing a calculated hash of the portion and the reference hash of the portion. The boot firmware initiates a boot of the firmware and validates the read-only portions (or the downloaded image of the read-only portions) by comparing a calculated hash of the read-only portions (or a calculated hash of the downloaded image) and the reference hash of the read-only portions. The boot firmware completes the boot of the firmware based at least on the read-only portions (or the downloaded image) being validated.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
84.
System and Method for Authenticating and Authorizing Cloud Accounts to Access On-Premises Services
A method, computer program product, and computing system for processing a request from a cloud-computing environment to access an on-premises Kubernetes application programming interface (API) server. A user associated with the request is identified. A user-specific service account for accessing the on-premises Kubernetes API server is generated. A protocol type associated with the request is determined. A user-specific reverse proxy for the user-specific service account is generated based upon, at least in part, the protocol type associated with the request. The request is forwarded to the on-premises Kubernetes API server using the user-specific reverse proxy.
A high-power multiplexer/demultiplexer (“mux/demux”) and a three-dimensional (“3D”) printed phase mask are provided for hollow-core optical fiber applications. The high-power mux/demux includes hollow core optical fiber interfaces configured to couple with free-space optical fiber cables, a diffraction grating, a 3D printed phase mask, and a set of lenses. The diffraction grating is configured, based on different wavelengths, either to at least diffract each optical signal of a plurality of optical signals having different wavelengths into two or more optical signals or to at least diffract a single optical signal having multiple wavelengths into a plurality of optical signals. The phase mask includes reflective features configured to reflect optical signals at different optical path lengths to provide reflected optical signals with different phases. The set of lenses is configured to collimate optical signals onto or from the diffraction grating or to focus optical signals onto or from the phase mask.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
G02B 6/27 - Optical coupling means with polarisation selective and adjusting means
G02B 6/32 - Optical coupling means having lens focusing means
G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
86.
UTILIZING LARGE GENERATIVE MODELS TO IMPROVE BAD-QUALITY AND SUBJECTIVE DATA
The disclosure describes a subjective data application system that utilizes large generative models (LGMs) to leverage unlabeled and poorly labeled subjective data. The subjective data application system utilizes multiple instances of LGMs as label functions, which in turn creates a dependable training dataset from a collection of unlabeled subjective data. By using this reliable training data, the subjective data application system develops and trains lightweight, computationally efficient, generative models. These models are then employed to process subjective data with accuracy and speed in real-time or online applications.
The present disclosure relates to methods and systems for providing performance-aware MUX selection for traffic in layer-4 load balancing. The methods and systems assign a subset of VIP ranges (VIP shards) to a subset of MUXes based on the capacity of the MUXes. The methods and systems allow sources (end-hosts) in the same datacenter (DC) to select the MUXes for intra-DC traffic. The methods and systems allow the sources to use weights calculated by a controller for splitting the traffic across MUXes based on an end-to-end latency of the MUXes. The methods and systems allow the sources to learn which MUXes handle the traffic by using packet modification, and allow the MUXes to route the packets to reach specific MUXes.
A deep learning model is trained to learn to generate a better-quality unit test case for a focal method through reinforcement learning using a reward score that considers static code quality properties of a best coding standard. The static code quality properties include an assertion in the predicted unit test case, an invocation of the focal method in the predicted unit test case, and a descriptive name for the predicted unit test case. A reward model is trained to compute a reward score for a model-predicted unit test case based on the static code quality properties. The reward score is used in a proximal policy optimization method to produce a policy loss that updates the parameters of the deep learning model towards generating a better-quality unit test case.
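The three static properties suggest a reward along the lines of the sketch below, with naive string heuristics standing in for the real static analysis of the predicted test; the function name and thresholds are invented.

```python
def static_quality_reward(test_source: str, focal_method: str) -> float:
    """Score a predicted unit test on the three static code quality properties."""
    has_assertion = "assert" in test_source
    invokes_focal = (focal_method + "(") in test_source
    # "Descriptive name" approximated as any test identifier longer than a stub name.
    descriptive_name = any(len(token) > 8 for token in test_source.split()
                           if token.startswith("test"))
    return (has_assertion + invokes_focal + descriptive_name) / 3.0
```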
A computer-implemented method is provided that generates shots for inclusion in a few-shot learning technique. The method includes generating an input, such as a prompt, for a generative model. The input includes a received example generative model input, and instructions which, when processed by the generative model, cause the generative model to generate example input instructions according to different tiers. The input is provided to the generative model, and in response the generated example input instructions are received. The generated example input instructions are stored as shots in a data store along with the computer language input.
Disclosed herein is a system for determining scores that are usable to filter a larger set of metrics (e.g., thousands of metrics) down to a smaller set of relevant metrics (e.g., hundreds of metrics) that can be more efficiently queried and ingested for root-cause analysis of an incident. During a training stage, the system analyzes known incidents and converts the names of the metrics, as described via customer-defined words, into mathematical representations (e.g., word embedding featurization vectors). When a new metric with a new name is received for a new incident, the system implements an incident inference stage during which the new name is converted into a new mathematical representation. The system compares the new mathematical representation to the mathematical representations to identify a similar mathematical representation. The system retrieves the score for the metric associated with the similar mathematical representation and assigns the retrieved score to the new metric.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
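The inference stage of the metric-scoring abstract above amounts to a nearest-neighbor lookup in embedding space, roughly as sketched here; the embedding function itself is left abstract, and the score transfer rule is an assumed simplification.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.hypot(*a) * math.hypot(*b)) or 1.0)

def score_new_metric(new_embedding: list[float],
                     known: list[tuple[list[float], float]]) -> float:
    """known: [(embedding of a known metric name, its learned relevance score), ...]."""
    # Assign the new metric the score of its most similar known metric.
    _, best_score = max(known, key=lambda pair: cosine(new_embedding, pair[0]))
    return best_score
```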
91.
Sentence Representation Generation for Cross-lingual Retrieval
The present disclosure proposes a method, apparatus and computer program product for sentence representation generation for cross-lingual retrieval. A target sentence may be obtained. An initial target sentence representation of the target sentence may be generated through an encoder, the encoder pretrained through a contrastive context prediction mechanism. A target sentence representation of the target sentence for cross-lingual retrieval may be generated based on the initial target sentence representation through cross-lingual calibration.
Generally discussed herein are devices, systems, and methods for transcript summarization. A method can include receiving, from a user through a user interface, a segmentation granularity value indicating a number of events in a transcript to be included in a summary; extracting, by a ranker model and from the transcript, a number of hints equal to the number of events; generating, by a summarizer model that includes a re-trained language model, respective summaries, one for each event, of a portion of the transcript corresponding to the event; and providing the respective summaries as an overall summary of the transcript.
A method of providing data communication between a first device and a second device includes establishing a first communication link with a downstream device connected to the second device using a first mode via a USB-type interface, wherein in the first mode the USB-type interface utilizes a first set of USB communication lanes; establishing a second communication link with the first device via a USB-C port using an Alternate (Alt) mode, wherein the Alt mode utilizes the first set of USB communication lanes; and, in accordance with establishing the second communication link, changing a mode of the first communication link so that the first communication link does not communicate via the first set of USB communication lanes.
Improved branch target buffer (BTB) structures are provided. A device can include branch target buffers storing entries corresponding to branch instructions and corresponding targets of the branch instructions. The device can include a victim cache storing a branch target buffer entry that has been evicted from a branch target buffer of the branch target buffers. The device can include branch prediction circuitry configured to access the victim cache responsive to receiving respective miss indications from each branch target buffer of the branch target buffers.
Branch target buffer structures are provided. A device can include a hierarchy of branch target buffers storing entries corresponding to branch instructions, the hierarchy of branch target buffers including respective branch target buffers that have progressively slower access times. The device can include a first program counter configured to generate a first program counter value associated with a next instruction of an executing application. The device can include a second program counter configured to predict a second program counter value that is associated with a subsequent instruction of the executing application that is after the next instruction. The device can include first branch prediction circuitry configured to populate a branch target buffer of the branch target buffers based on the second program counter value.
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
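Behaviorally, the victim cache in the two abstracts above is consulted only after every branch target buffer level misses, as in this sketch; the buffers are plain dictionaries keyed by program counter, a simplification of real set-associative structures.

```python
def predict_target(pc: int, btb_levels: list[dict], victim_cache: dict):
    """Return a predicted branch target, falling back to the victim cache on a full miss."""
    for level in btb_levels:          # progressively slower levels of the hierarchy
        if pc in level:
            return level[pc]          # hit: predicted target of the branch
    # Missed every level: an entry evicted earlier may still be in the victim cache.
    return victim_cache.get(pc)       # None => no prediction available
```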
A device for transmitting data from a plurality of solid core optical fibers to a hollow core fiber comprises a multiplexer; a first 4F optical system that is operative to receive the light output from the multiplexer; and an amplifier disposed downstream of the first 4F optical system and upstream of a second 4F optical system, where the second 4F optical system is operative to receive amplified light output from the amplifier and output the amplified light to the hollow core fiber in a form that is compatible with the hollow core fiber.
G02B 6/32 - Optical coupling means having lens focusing means
G01M 11/00 - Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
G02B 27/09 - Beam shaping, e.g. changing the cross-sectioned area, not otherwise provided for
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
97.
BROWSER-LEVEL RUNTIME SUPPLY CHAIN SECURITY AND ATTACK DETECTION
Methods, systems, apparatuses, and computer-readable storage mediums are described for enabling runtime supply chain security of web applications and the discovery of active malware attacks. For example, a server is configured to receive CSP-based data from browsers executing on various clients. Such data may be received via a browser extension or via a proxy between the web applications and the browsers. Using the CSP-based data, the server generates a database of supply chain inventory. The database specifies resources that are loaded for a particular web application, along with a location from where such resources are loaded. The database further specifies a chain of dependencies between such resources. The database is analyzed to determine whether any such resources have been compromised with malware or whether clients on which such resources have been loaded have been compromised with malware. Responsive to determining such cases, action(s) may be performed to mitigate the malware.
Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The technique can then include analyzing data location information in the metadata to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains data corresponding to the system memory section in the received request, the retrieved data is transmitted from the data portion of the first memory to the processor in response to the received request. Otherwise, the technique can include identifying a memory location in the first memory or a far memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 12/0811 - Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
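The read path of the tiering technique above can be pictured as below, with the metadata's data-location information reduced to a resident-section tag per tier slot; the slot mapping and NUM_SLOTS constant are assumptions about the layout.

```python
NUM_SLOTS = 4   # illustrative size of the first tier

def read_section(section: int, first_tier: dict, far_memory: dict):
    """first_tier: slot -> (resident_section, data); far_memory: section -> data."""
    slot = section % NUM_SLOTS
    resident_section, data = first_tier.get(slot, (None, None))
    if resident_section == section:
        return data                   # metadata confirms the first tier holds it
    return far_memory[section]        # otherwise fetch from the slower memory
```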
The present disclosure relates to systems and methods implemented on a memory controller for detecting and mitigating memory attacks (e.g., row hammer attacks). For example, a memory controller may track activations of row addresses within a memory hardware (e.g., a DRAM device) and determine whether a pattern of activations is indicative of a row hammer attack. This is determined using a counting mode for corresponding memory sub-banks. Where a likely row hammer attack is detected, the memory controller may activate a sampling mode (rather than the counting mode) for a particular sub-bank to identify which of the row addresses should be refreshed on the memory hardware. The implementations described herein provide a low computational cost alternative to heavy-handed detection mechanisms that require access to significant computing resources to accurately detect and mitigate row hammer attacks.
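A sketch of the two-mode tracker: cheap per-sub-bank counting until the activation pattern looks like an attack, then row sampling to pick refresh victims. The class name, threshold, and sampling rate are invented parameters, not values from the disclosure.

```python
import random
from collections import Counter

class RowHammerTracker:
    def __init__(self, attack_threshold: int, sample_rate: float = 0.1):
        self.counts = Counter()       # counting mode: activations per sub-bank
        self.threshold = attack_threshold
        self.sample_rate = sample_rate
        self.sampling = set()         # sub-banks switched to sampling mode
        self.rows_to_refresh = set()

    def on_activate(self, sub_bank: int, row: int) -> None:
        if sub_bank in self.sampling:
            # Sampling mode: probabilistically record rows that should be refreshed.
            if random.random() < self.sample_rate:
                self.rows_to_refresh.add((sub_bank, row))
        else:
            self.counts[sub_bank] += 1
            if self.counts[sub_bank] > self.threshold:
                self.sampling.add(sub_bank)   # likely attack: switch to sampling mode
```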
Disclosed is a range preview system that displays data from a relevant range of cells. The range preview system intelligently elides and contextualizes data ranges for efficient visualization. The range preview system optimizes space utilization by selectively collapsing rows and columns. For example, rows and columns that are referenced by a formula may be selected for inclusion in the range preview. This conserves screen real estate while providing users with a concise overview of data ranges. The range preview system may also infer labels, providing context during formula interpretation by associating references with nearby headers or other descriptions.
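The elision rule might reduce to keeping only the rows and columns a formula references (plus a header row), as in this sketch; reference extraction is simplified to a regex over A1-style addresses, and the header heuristic is an assumption.

```python
import re

def rows_cols_to_keep(formula: str) -> tuple[set[int], set[str]]:
    """Rows/columns to leave expanded in the range preview; all else is collapsed."""
    keep_rows, keep_cols = {1}, set()      # row 1 kept as a likely header row
    for col, row in re.findall(r"([A-Z]+)(\d+)", formula):
        keep_cols.add(col)
        keep_rows.add(int(row))
    return keep_rows, keep_cols

# e.g. rows_cols_to_keep("=SUM(B2:B5)+C9") -> ({1, 2, 5, 9}, {"B", "C"})
```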