Systems and methods for providing a formative feedback engine and its functions are provided herein. In an example, the formative feedback engine may receive a written assessment from a drafting user and identify content therein. Based on the content, the formative feedback engine may generate a commentary insight based on feedback on similar content that was previously submitted. The commentary insight may be generated in an evaluation style of a reviewing user (e.g., educator). The reviewing user may review the commentary insight and modify the insight as needed before the commentary insight is provided to the drafting user, such as by being displayed within the written assessment for context.
Techniques for improving how LSR is performed are disclosed. A service accesses a depth image, a pose correction matrix, and a color image. The service extracts a carrier geometry from the depth image. The service forward projects the LSR carrier geometry by multiplying each vertex of the LSR carrier geometry with the pose correction matrix. While the GPU is operating in a shadow map mode, the service causes the GPU to perform a rasterization process to produce a UV map and a Z buffer. The service discards the UV map, resulting in the per pixel UV corrections included in the UV map also being discarded. The service recovers the per pixel UV corrections using the Z buffer. The service uses the recovered per pixel UV corrections to resample the color image, resulting in generation of a corrected color image.
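A minimal sketch of the forward-projection step described above, assuming homogeneous vertex coordinates and a 4x4 pose correction matrix; the function name and shapes are illustrative and not taken from the disclosure.

```python
# Hypothetical sketch (not the disclosed implementation): forward-projecting an
# LSR carrier geometry by multiplying each vertex with a pose correction matrix.
import numpy as np

def forward_project(vertices: np.ndarray, pose_correction: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) carrier-geometry vertices; pose_correction: 4x4 matrix."""
    # Promote to homogeneous coordinates so one 4x4 multiply applies the whole correction.
    homogeneous = np.hstack([vertices, np.ones((vertices.shape[0], 1))])  # (N, 4)
    projected = homogeneous @ pose_correction.T                            # (N, 4)
    # Perspective divide back to 3-D positions.
    return projected[:, :3] / projected[:, 3:4]

# Example: an identity correction leaves the geometry unchanged.
verts = np.array([[0.0, 0.0, 1.0], [0.5, -0.2, 2.0]])
print(forward_project(verts, np.eye(4)))
```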
The technology described herein improves data security by giving users control over data collected by and/or for machine-learning (ML) systems through a computing system. A first aspect of providing control to the user is the display of a data-collection status indicator. The status indicator, which may take the form of an icon or other user interface feature, communicates whether data collection is active or inactive. A second aspect of providing control to the user is providing a data-collection management interface through which the information collected can be managed. The data-collection management interface allows the user to provide and/or edit data-collection policies and to delete previously collected data. These policies can establish criteria indicating when data may or may not be collected. The data-collection management interface may allow the user to pause data collection for a period of time.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
4.
MULTI-NETWORK ROUTING OF FULL MOTION VIDEO STREAMS IN ONE-WAY TRANSFER SYSTEMS
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. When the video stream is received on the high-trust side, the GUID is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses. The destination devices may include further video relays of other networks that similarly use the GUID to route the video streams within their own respective networks.
Techniques for improving the training and prompt phase inferencing of a long sequence transformer are disclosed. A service shards an activation matrix and a weight matrix into chunks. The service distributes the activation matrix chunks and the weight matrix chunks to multiple computer systems. The activation matrix chunk remains stationary at each computer system. The weight matrix chunks, on the other hand, are subjected to a gathering operation in which each weight matrix chunk is used for a matrix multiplication operation against the activation matrix chunk and then replaced by a newly acquired weight matrix chunk. While the matrix multiplication operation is occurring, the service transmits the current weight matrix chunk to a new computer system and receives a new weight matrix chunk from another computer system.
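A toy software model of the gathering pattern described above, under the assumption of a simple ring topology in which the stationary activation chunks are row shards and the circulating weight chunks are column shards; all names and shapes are illustrative.

```python
# Ring-style sketch (assumed topology): activation chunks stay put while weight
# chunks circulate, so every device eventually multiplies against every weight chunk.
import numpy as np

def ring_matmul(activations: np.ndarray, weights: np.ndarray, num_devices: int) -> np.ndarray:
    a_chunks = np.array_split(activations, num_devices, axis=0)   # stationary per device
    w_chunks = np.array_split(weights, num_devices, axis=1)       # circulate around the ring
    col_starts = np.cumsum([0] + [w.shape[1] for w in w_chunks])
    outputs = [np.zeros((a.shape[0], weights.shape[1])) for a in a_chunks]

    holder = list(range(num_devices))  # holder[d] = index of the weight chunk at device d
    for _ in range(num_devices):
        for d in range(num_devices):
            j = holder[d]
            # Multiply the stationary activation chunk against the currently held weight
            # chunk; in the described system the next chunk is transferred concurrently.
            outputs[d][:, col_starts[j]:col_starts[j + 1]] = a_chunks[d] @ w_chunks[j]
        holder = [holder[(d - 1) % num_devices] for d in range(num_devices)]  # rotate chunks
    return np.vstack(outputs)

A, W = np.random.randn(8, 16), np.random.randn(16, 12)
assert np.allclose(ring_matmul(A, W, 4), A @ W)
```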
Systems and methods are provided for filtering data center power load transients caused by AI workloads. In examples, a workload orchestrator receives a first signal indicating that a first plurality of compute nodes is starting a compute phase during which artificial intelligence (“AI”) workloads are executed by AI accelerators on the first plurality of compute nodes. In response to receiving the first signal, the workload orchestrator causes a second plurality of compute nodes to stop execution of general (non-AI) workloads. The workload orchestrator receives a second signal indicating that the first plurality of compute nodes has completed the compute phase and is starting a communication phase during which AI data is exchanged among the AI accelerators on the first plurality of compute nodes. In response to receiving the second signal, the workload orchestrator causes the second plurality of compute nodes to continue execution of the general workloads.
Systems and methods are provided for processor performance acceleration using hardware-enhanced multiply-accumulate streaming. In examples, a dispatcher of a processor dispatches each of two or more multiply-accumulate (“MAC”) or arithmetic logic unit (“ALU”) instructions (and corresponding input data values), which are directed to a pipeline processing system and received in two or more consecutive clock cycles, to one of a set of input registers among a plurality of sets of input registers based on a sub-stream among a plurality of sub-streams, into which the two or more MAC or ALU instructions have been divided. The input data values for the plurality of sub-streams are processed by a MAC device or an ALU device in consecutive clock cycles, with output values from each sub-stream being stored in a sub-stream accumulator for that sub-stream, the accumulated value of which is added to a pipeline accumulator after all sub-streams have been processed.
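A simplified software illustration of the sub-stream idea, assuming round-robin dispatch and a final fold into a pipeline accumulator; the hardware pipeline itself is not modeled.

```python
# Illustrative model (hypothetical, not the hardware design): a stream of
# multiply-accumulate operations split into sub-streams with per-sub-stream
# accumulators that are folded into a pipeline accumulator at the end.
def streamed_mac(pairs, num_substreams=4):
    sub_accumulators = [0] * num_substreams
    for i, (a, b) in enumerate(pairs):
        # Round-robin dispatch: consecutive operations land in different sub-streams,
        # so dependent accumulations never collide in back-to-back cycles.
        sub_accumulators[i % num_substreams] += a * b
    pipeline_accumulator = sum(sub_accumulators)  # added after all sub-streams finish
    return pipeline_accumulator

print(streamed_mac([(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]))  # 190
```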
Solutions are disclosed that provide automated audience response estimation (sentiment analysis) and presenter feedback. Examples capture a plurality of multi-modal signals from a first multi-participant interaction session, such as capturing an audio feed, a video feed, a chat, and actions (e.g., hand-raising) from a video teleconference. Timing information is correlated, enabling accurate sentiment analysis across the multi-modal signals; for example, nodding in agreement detected in the video feed is correlated with spoken words captured in the audio feed and identified in an automated transcript. This enables reporting audience sentiment to the presenter, in near-real-time (i.e., during the teleconference) in some examples. Some examples combine multi-modal sentiment analysis results from multiple teleconferences in order to create or train a presentation coach that is able to suggest improvements to planned presentations. Some examples are able to identify a particular audience member (e.g., a VIP), and perform individualized sentiment analysis for that person.
A method for detecting an anomaly in resource utilization observed within a cloud computing platform includes observing an actual resource utilization for a customer of the cloud computing platform during an anomaly detection period; determining a historical utilization distribution for the customer that defines values of a resource utilization metric across repeated instances of a seasonal cycle; identifying a temporal location of an anomaly detection period within the seasonal cycle; filtering the historical utilization distribution to construct a distribution of seasonally-relevant values of the resource utilization metric, each value in the distribution of seasonally-relevant values corresponding to the temporal location within one of the repeated instances of the seasonal cycle; computing, based on the distribution of seasonally-relevant values, a resource utilization prediction for the customer; and automatically generating an anomaly alert in response to determining that the actual resource utilization of the customer satisfies a predefined relationship with the resource utilization prediction.
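A hedged sketch of the seasonal filtering and alerting logic, assuming a weekly cycle of hourly samples and a mean-plus-standard-deviation prediction rule; the actual metric, cycle length, and predefined relationship may differ.

```python
# Seasonal anomaly sketch: keep only historical samples at the same position within the
# cycle, predict from that distribution, and alert when the observed utilization deviates
# beyond a chosen tolerance (all values here are illustrative).
import numpy as np

def detect_anomaly(history: np.ndarray, cycle_length: int, position: int,
                   actual: float, num_stddevs: float = 3.0) -> bool:
    # history is a flat series of utilization samples spanning repeated cycles.
    seasonal = history[position::cycle_length]          # seasonally-relevant values
    prediction = seasonal.mean()
    tolerance = num_stddevs * seasonal.std()
    return abs(actual - prediction) > tolerance         # predefined relationship satisfied

rng = np.random.default_rng(0)
weekly = np.tile(np.sin(np.linspace(0, 2 * np.pi, 168)), 8) + rng.normal(0, 0.05, 168 * 8)
print(detect_anomaly(weekly, cycle_length=168, position=42, actual=5.0))  # True
```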
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
10.
System and Method for Single Instruction, Multiple Data (SIMD) Enhancements of ARM64 Processors
A method, computer program product, and computing system for processing a portion of data using an ARM64 processor. The portion of data is determined to be unaligned to byte boundaries. The portion of data is unpacked from a single multi-bit word into multiple fixed-bit outputs by placing the portion of data at a byte boundary between the multiple fixed-bit outputs.
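An illustrative model of the unpacking operation, assuming 10-bit fields packed into one multi-bit word that are each placed into their own byte-aligned output; the field and output widths are assumptions.

```python
# Hypothetical illustration: fields not aligned to byte boundaries inside one multi-bit
# word are extracted and each placed at a byte boundary in its own fixed-bit output.
def unpack_unaligned(word: int, field_bits: int, num_fields: int, out_bits: int = 16):
    mask = (1 << field_bits) - 1
    outputs = []
    for i in range(num_fields):
        field = (word >> (i * field_bits)) & mask          # isolate the unaligned field
        outputs.append(field & ((1 << out_bits) - 1))      # byte-aligned in its own output
    return outputs

packed = (0b1100110011 << 20) | (0b0101010101 << 10) | 0b1111100000
print([bin(v) for v in unpack_unaligned(packed, field_bits=10, num_fields=3)])
```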
Data stored in a memory circuit may be encrypted using client keys that need to be available for high-speed data processing and yet held securely to avoid unauthorized access to the encrypted data. A secure processor circuit in a processor-based system obtains client keys associated with client applications and generates secure key-encryption keys that are used to encrypt the client keys so the client keys can be securely stored in the memory circuit. In some examples, data keys for encrypting data blocks associated with the client application may be generated from the client key, encrypted by a data key-encryption key generated in the secure processor circuit, and stored in the memory circuit. In such examples, because the client keys and data keys are encrypted while in memory, they are safer from software attacks on the memory circuit, which improves the security of the encrypted data blocks.
Systems and methods are provided for implementing a hollow core fiber-based line-system with an out-of-band optical supervisory channel. A first transponder, located at a first location, transmits optical data traffic at a first wavelength range to a second transponder, at a second location, over at least one hollow core fiber (“HCF”) cable communicatively coupling the first and second transponders over a fiber path distance that is at least 60 kilometers. The first transponder further transmits an optical supervisory channel (“OSC”) signal at a second wavelength to the second transponder over the at least one HCF cable. The OSC signal corresponds to diagnostic data associated with transmission of the optical data signal and/or associated with operation of a system including the first and second transponders. The second wavelength is separated from the first wavelength range by at least a threshold wavelength separation.
This disclosure describes a framework for generating audio translations (e.g., dubbing) of videos, including being performed locally on a client device. For instance, this disclosure describes a video dubbing system that utilizes length-aware speech translation models to provide dynamic audio translations for videos that accurately align with the source audio. In particular, the video dubbing system utilizes length-aware translations to prevent audio misalignment of translated audio, resulting in natural-sounding audio translations. Additionally, the video dubbing system uses techniques such as beam search to efficiently determine dynamic translated audio from multiple versions that align accurately with the source audio. As further described below, the video dubbing system seamlessly provides translated audio phrases in real time that dynamically add or remove words to match the duration of the source audio phrases, resulting in a much more natural dubbing experience.
A computer-implemented method includes receiving an output from a generative machine learning model. The output includes a definition of a graphical user interface (GUI) component configured to receive user input for refining an input. The definition of the GUI component is according to a predefined schema. Based on the definition, executable code for rendering the GUI component is generated, and the code is caused to be executed to render the GUI component. The method provides a mechanism for rendering dynamically-generated GUI components for refining inputs to generative machine learning models.
Power distribution using electrolyte fluid is disclosed. Electrolyte fluid is charged at a charging stack using electricity from an electrical power source. The charged electrolyte fluid is flowed through an electrolyte loop to a load stack. At the load stack, electrochemical energy in the charged electrolyte fluid is used to supply electricity to power an electrical load.
This disclosure describes a proactive deployment impact system that detects and addresses the security impact of candidate code-based infrastructure changes before they are deployed in a production environment within a cloud computing system. The proactive deployment impact system implements a lightweight preemptive security framework, based on runtime resource information, to determine whether a requested candidate code-based infrastructure change would introduce new security risks, attack patterns, or breach vulnerabilities. Furthermore, the proactive deployment impact system can actively block the deployment of negatively impacting candidate changes, report potential security breaches, and/or automatically modify the candidate changes to eliminate security vulnerabilities.
A service computes a self-attention of a long sequence transformer. The computation is two-dimensional, with a first dimension being along a Q-dimension and a second dimension being along a KV-dimension. The service determines that the Q-dimension does not carry any data dependencies but that the KV-dimension does carry one or more data dependencies. The service splits the Q-dimension and distributes those splits to a processor grid. The service splits the one or more data dependencies along the KV-dimension and distributes those splits to the processor grid. The service performs a reduction operation to obtain a final result. The service distributes the final result among the processors.
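A numerical sketch of the two-dimensional split, assuming a streaming-softmax reduction to resolve the KV-dimension dependency while Q blocks are processed independently; the processor grid is simulated with a plain loop.

```python
# Sketch (assumed numerics, not the disclosed service): split along the Q-dimension, which
# carries no data dependency, and reduce along the KV-dimension with a streaming softmax.
import numpy as np

def attention_q_block(q_block, k_chunks, v_chunks):
    d = q_block.shape[-1]
    m = np.full(q_block.shape[0], -np.inf)          # running max (softmax dependency)
    l = np.zeros(q_block.shape[0])                  # running normalizer
    acc = np.zeros((q_block.shape[0], v_chunks[0].shape[-1]))
    for k, v in zip(k_chunks, v_chunks):            # reduction along the KV-dimension
        scores = q_block @ k.T / np.sqrt(d)
        m_new = np.maximum(m, scores.max(axis=-1))
        scale = np.exp(m - m_new)
        p = np.exp(scores - m_new[:, None])
        l = l * scale + p.sum(axis=-1)
        acc = acc * scale[:, None] + p @ v
        m = m_new
    return acc / l[:, None]

q, k, v = (np.random.randn(8, 4) for _ in range(3))
out = np.vstack([attention_q_block(qb, np.split(k, 2), np.split(v, 2))
                 for qb in np.split(q, 2)])          # Q blocks are fully independent
scores = q @ k.T / np.sqrt(4)
ref = (np.exp(scores - scores.max(-1, keepdims=True)) /
       np.exp(scores - scores.max(-1, keepdims=True)).sum(-1, keepdims=True)) @ v
print(np.allclose(out, ref))  # True
```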
G06F 7/78 - Arrangements for rearranging, permuting or selecting data according to predetermined rules, independently of the content of the data for changing the order of data flow, e.g. matrix transposition or LIFO buffers; Overflow or underflow handling therefor
18.
HYBRID MACHINE LEARNING MODEL WITH LAYER DECOMPOSITION
The technology described herein is related to a hybrid neural network that divides operations of a neural network layer between a trusted environment and an untrusted environment. In an aspect, a first portion of nodes (or neurons) in a layer operate on a trusted device and a second portion of nodes in the layer operate on the untrusted device. The layer output is generated by combining the result produced by the nodes in the trusted and untrusted environments. The technology described herein decomposes a pretrained neural network by identifying a small amount (relative to total nodes in the layer) of nodes in a layer that provide the largest contribution to an accurate model result. The technology described herein may use a matrix decomposition technique to identify components (e.g., singular values and singular vectors) that are able to reproduce the original matrix, but vary in the importance of information they hold.
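One way such a decomposition could look, sketched with a plain singular value decomposition in which the top singular components form the trusted portion and the residual forms the untrusted portion; the rank and combination rule are assumptions.

```python
# Illustrative decomposition (an assumed split, not the described product): SVD separates a
# layer's weight matrix into a small high-importance part for the trusted device and a
# residual for the untrusted device; combining the two outputs reproduces the layer output.
import numpy as np

def decompose_layer(weight: np.ndarray, trusted_rank: int):
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    trusted = (u[:, :trusted_rank] * s[:trusted_rank]) @ vt[:trusted_rank]    # largest contribution
    untrusted = (u[:, trusted_rank:] * s[trusted_rank:]) @ vt[trusted_rank:]  # remainder
    return trusted, untrusted

w = np.random.randn(64, 32)
x = np.random.randn(32)
trusted_w, untrusted_w = decompose_layer(w, trusted_rank=4)
# The layer output is recovered by combining the results from both environments.
print(np.allclose(w @ x, trusted_w @ x + untrusted_w @ x))  # True
```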
A service computes a self-attention of a long sequence transformer. The computation is two-dimensional, with a first dimension being along a Q-dimension and a second dimension being along a KV-dimension. The service determines that the Q-dimension does not carry any data dependencies but that the KV-dimension does carry one or more data dependencies. The service splits the Q-dimension and distributes those splits to a processor grid. The service splits the one or more data dependencies along the KV-dimension and distributes those splits to the processor grid. The service performs a reduction operation to obtain a final result. The service distributes the final result among the processors.
Techniques for improving the training and prompt phase inferencing of a long sequence transformer are disclosed. A service shards an activation matrix and a weight matrix into chunks. The service distributes the activation matrix chunks and the weight matrix chunks to multiple computer systems. The activation matrix chunk remains stationary at each computer system. The weight matrix chunks, on the other hand, are subjected to a gathering operation in which each weight matrix chunk is used for a matrix multiplication operation against the activation matrix chunk and then replaced by a newly acquired weight matrix chunk. While the matrix multiplication operation is occurring, the service transmits the current weight matrix chunk to a new computer system and receives a new weight matrix chunk from another computer system.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
An example includes a processor, a first memory, and a second memory. The first memory includes an action store. The action store includes a hierarchical arrangement of memory layers and actions stored in one or more of the memory layers. The action store stores first mappings among actions and second mappings among actions and memory layers. The first mappings among actions are hierarchical and an action is executable by an agent. The second memory includes an instruction to cause the processor to create or update the first mappings, the second mappings, or the first mappings and the second mappings, in response to a signal received from the agent via a device. The signal indicates input including a task, feedback relating to an execution of one of the actions by the agent, and/or performance data associated with the action.
An example sends query input and a first instruction to a generative machine learning model (GMLM). The first instruction is to cause the GMLM to generate and output a query input summary. The query input summary and a second instruction are sent to the GMLM. The second instruction is to cause the GMLM to provide query results using multiple different queries and the query input summary. The query results and a third instruction are sent to the GMLM. The third instruction is to cause the GMLM to summarize a comparison of a query result with the query input summary. A query result evaluation summary is generated and output by the GMLM. A signal is received from a position within a presentation of the query result evaluation summary. The first instruction is updated to include the signal in the query input summary.
The present disclosure generally relates to optimizing load-balancing of network endpoints using tree collectives representing a logical network communication topology for the network endpoints. Systems and methods described herein eliminate the previously restrictive conditions imposed on tree-based communication collectives by generating collective trees with any arity and representing any number of physical network endpoints. The resulting collective trees ensure that each represented network endpoint has a number of outgoing flows and a number of incoming flows that are no more than the arity of the collective tree. In this way, the described systems and methods inject significant efficiencies into communication collectives within networked compute nodes by eliminating communication bandwidth latencies and bottlenecks.
A technique compresses a pretrained model having a sequence of model parts that produce output results of the same shape, to produce a compressed model. The technique includes converting the pretrained model into a difference-based model having instances of difference-based weights, converting the difference-based model into a reduced-dimension model having instances of reduced-dimension weights, and then fine-tuning the reduced-dimension model. Each instance of difference-based weights expresses the difference between neighboring instances of full weights in the pretrained model. The execution of the compressed model includes generating an instance of full weights associated with a particular fine-tuned model part of the compressed model. This is performed by combining instances of weights associated with different levels of the compressed model. The technique significantly reduces the amount of resources that are required to store and run a machine-trained model.
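A conceptual sketch of the difference-based representation, assuming SVD-based dimension reduction of each neighboring difference and reconstruction by re-accumulating the reduced differences; shapes, rank, and the fine-tuning step are omitted.

```python
# Conceptual sketch (assumed shapes and rank, not the described procedure): express each
# model part's weights as a difference from its neighbor, reduce each difference, and
# regenerate full weights for a part by combining the levels of the compressed model.
import numpy as np

def low_rank(delta: np.ndarray, rank: int):
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]           # reduced-dimension weights

def compress(parts, rank=2):
    base = parts[0]
    deltas = [low_rank(parts[i] - parts[i - 1], rank) for i in range(1, len(parts))]
    return base, deltas

def weights_for_part(base, deltas, index):
    w = base.copy()
    for a, b in deltas[:index]:                         # accumulate neighboring differences
        w += a @ b
    return w

parts = [np.random.randn(16, 16) for _ in range(4)]
base, deltas = compress(parts, rank=16)                 # full rank: exact reconstruction
print(np.allclose(weights_for_part(base, deltas, 3), parts[3]))  # True
```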
A computing system (1) is provided, including a quantum computing device (10) that includes a plurality of qubit islands (40). Each of the qubit islands includes a plurality of topological superconducting nanowires (42). Majorana zero modes (46) are instantiated at respective topological nanowire endpoints (47) of the topological superconducting nanowires. Each of the qubit islands further includes a trivial superconducting nanowire (50) that couples the topological superconducting nanowires. Each of the qubit islands further includes a Josephson junction (52) that couples the trivial superconducting nanowire to a ground (54).
An example provides a multi-agent system. Via an orchestrator agent, a query input is received. Via the orchestrator agent, the query input and a first instruction are provided to a generative machine learning model (GMLM). The first instruction is to cause the GMLM to determine queries using the query input. Via a query execution agent, the queries execute in parallel. Via the orchestrator agent, it is determined whether the queries are executing. Via a query evaluation agent, query results of execution of the queries and a second instruction are provided to the GMLM. The second instruction is to cause the GMLM to generate, for each query result, a query result summary. Via the orchestrator agent, the query result summaries are used to determine a subset of the query results for presentation via a device.
In a computing network implementing an adaptive load balancing scheme using entropy values (EVs) to select network paths, the next expected packet sequence numbers (PSNs) sent along different paths are tracked. A generation number is increased to obtain a new EV and a last probe packet is sent to clear an old EV. If a starting PSN is divisible by a number k, an entropy slot is derived for each PSN using a modulo function based on k.
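A toy interpretation of the modulo-based slot derivation, assuming the slot is simply the PSN modulo k once the starting PSN is k-aligned; the disclosure's exact mapping may differ.

```python
# Toy sketch under stated assumptions: when the starting packet sequence number is divisible
# by k, each PSN maps to an entropy slot with a modulo on k, spreading consecutive packets
# across the k tracked paths.
def entropy_slot(psn: int, starting_psn: int, k: int):
    if starting_psn % k != 0:
        return None                 # mapping only defined here for k-aligned starting PSNs
    return psn % k                  # modulo function based on k selects the slot

print([entropy_slot(psn, starting_psn=96, k=8) for psn in range(96, 104)])
# [0, 1, 2, 3, 4, 5, 6, 7]
```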
The present disclosure relates generally to systems and methods for updating an input prompt for a generative AI model (e.g., an LLM) based on feedback that is provided in connection with an output from the generative AI model that is unsatisfactory. For example, where a user indicates that an output from the generative AI model is incorrect, inaccurate, or otherwise an unsatisfactory response to an input prompt, this disclosure describes models to facilitate generation of feedback hints and/or additional information that can be included within an updated prompt that, when provided as an input to the generative AI model, has an improved likelihood of returning an output that is in line with user expectations. Indeed, features of the systems and methods described herein provide a framework for improving outputs of generative AI models so that they are more accurate or otherwise more responsive to the input prompts.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches the datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. The enriched video stream is divided into its video stream and enrichment components, which are then filtered separately by a guard for the high-trust side. The filtered components may then be rejoined to reform the enriched video stream. When the enriched video stream is received on the high-trust side, the GUID in the datagram is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses.
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
30.
Method and System of Providing Security for Anonymous Autodiscover Services
A method and system for securing an anonymous discovery service may include receiving a request from a client device, the request being directed to an anonymous Autodiscover service, identifying a source within the client device from which the request originated, and, responsive to the source being of a first type of sources, transmitting a first response to the client and, responsive to the source being of a second type of sources, transmitting a second response to the client. The first response does not return a Uniform Resource Locator (URL) to a service endpoint, while the second response returns a URL to a service endpoint. Furthermore, the anonymous discovery service may be a discovery service that requires no authentication.
The present disclosure relates to methods and systems for sharing with a plurality of users a chat session that uses large language models to provide responses for input messages received for the chat session. The methods and systems provide access to the chat session to the users and update the chat session in response to any changes made to the chat session by any of the users. The methods and systems allow the users to resume the chat session at a future time using the chat session history.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
32.
EFFICIENT ERROR CHAIN LOGGING IN A MULTI-CORE, ASYNCHRONOUS EXECUTION ENVIRONMENT
Methods, apparatuses, and products for efficient error chain logging in a multi-core, asynchronous execution environment, including: generating, in response to detecting an error at an error originating function in a chain of functions, a chain of error records that includes an error record for the error originating function and error records for each function at a higher-level than the error originating function, including: storing, in each error record other than the error record for the error originating function, error information that includes an identification of an error record for an immediately-lower level function; and storing each error record in a data structure for a core that is executing at least a portion of the chain of functions, wherein each core maintains a distinct data structure for storing error records.
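A simplified software model of the chained error records, assuming a per-core dictionary as the distinct data structure and a link field naming the immediately-lower record; the identifiers and fields are illustrative.

```python
# Simplified model (not the claimed implementation): when an error is detected at an
# originating function, build a chain of error records, each stored in a per-core data
# structure, with higher-level records pointing at the immediately-lower level record.
from collections import defaultdict
from itertools import count

per_core_records = defaultdict(dict)   # each core keeps its own distinct data structure
_record_ids = count()

def log_error_chain(core_id: int, call_chain: list, error: str):
    lower_record_id = None
    for depth, function_name in enumerate(reversed(call_chain)):
        record_id = next(_record_ids)
        per_core_records[core_id][record_id] = {
            "function": function_name,
            "error": error if depth == 0 else None,    # only the originator holds the error
            "lower_record": lower_record_id,           # link to the immediately-lower level
        }
        lower_record_id = record_id
    return lower_record_id                              # id of the highest-level record

top = log_error_chain(core_id=0, call_chain=["handler", "parser", "read_field"], error="EOF")
print(per_core_records[0][top])
```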
A computerized method determines whether an image group is an outlier with respect to a set of reference image groups. Reference feature vectors are generated for each image in the reference image groups and, using those reference feature vectors, reference statistical vectors are generated. Each reference statistical vector is associated with a reference image group. An input image group is received, and input feature vectors are generated based on the images of the input image group. The input feature vectors are used to generate an input statistical vector associated with the input image group. Outlier analysis is performed using the input statistical vector and the reference statistical vectors, and it is determined that the input image group is an outlier with respect to the reference image groups based on the performed outlier analysis. An automatic data analysis operation is then performed based on the outlier status of the input image group.
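A sketch of the group-level statistics and outlier test, with stand-in random features and a simple z-score rule in place of the unspecified feature extractor and outlier analysis.

```python
# Sketch with stand-in features: each image group is summarized as a statistical vector
# over its images' feature vectors, and the input group is flagged by a z-score test.
import numpy as np

def statistical_vector(feature_vectors: np.ndarray) -> np.ndarray:
    # Pool per-image feature vectors into one descriptor for the whole group.
    return np.concatenate([feature_vectors.mean(axis=0), feature_vectors.std(axis=0)])

def is_outlier(input_group: np.ndarray, reference_groups: list, z_threshold: float = 3.0) -> bool:
    refs = np.stack([statistical_vector(g) for g in reference_groups])
    target = statistical_vector(input_group)
    z = np.abs(target - refs.mean(axis=0)) / (refs.std(axis=0) + 1e-8)
    return bool(z.max() > z_threshold)

rng = np.random.default_rng(1)
references = [rng.normal(0, 1, (20, 8)) for _ in range(30)]   # 30 reference image groups
print(is_outlier(rng.normal(0, 1, (20, 8)), references))      # in-distribution group
print(is_outlier(rng.normal(5, 1, (20, 8)), references))      # clearly shifted group -> True
```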
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces; using context analysis; Selection of dictionaries
G16H 30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
An in-cable fiber shuffle is described, the in-cable fiber shuffle having a first end and a second end. The in-cable fiber shuffle comprises a first plurality of optical fibers, each optical fiber comprising a continuous length of fiber from a first end to a second end. The in-cable fiber shuffle further comprises a first plurality of multi-fiber connectors at the first end of the in-cable fiber shuffle and a second plurality of multi-fiber connectors at the second end of the in-cable fiber shuffle. Each one of the first plurality of multi-fiber connectors is connected to the first ends of a different, non-overlapping subset of the first plurality of optical fibers, and each optical fiber connected to a respective one of the first plurality of multi-fiber connectors is connected at its second end to a different one of the second plurality of multi-fiber connectors.
Methods and apparatuses for improving the speed, quality, and relevance of automated responses provided by a question answering system for security data are described. The question answering system may generate and utilize a large language model that is trained to combine the language of security data, such as the language found in security logs and alerts, with natural language text. Given an input prompt (or a search query) from an end user of the question answering system, the question answering system may identify relevant content from the security data and display a response based on the relevant content. The question answering system may allow the end user of the question answering system to query security logs using natural language text without requiring the end user to provide a structured query and without requiring the security data be parsed and ingested into a database system.
A hybrid battery system (HBS) for supplying power during long-term outages and short-term outages for a datacenter and related methods are described. An example HBS comprising a set of solid-state hydrogen batteries (SSHBs) and a set of rechargeable batteries (RBs) is configured to supply power to compute resources associated with a datacenter. The hybrid battery system is coupled to fuel cells to supply hydrogen to the fuel cells by heating an SSHB. A power control system is configured to: (1) during a short-term outage associated with the datacenter, selectively cause a subset of the set of RBs to supply power to the compute resources, and (2) during a long-term outage associated with the datacenter, selectively cause heat to be supplied to the set of SSHBs, resulting in a supply of hydrogen to one or more of the fuel cells, allowing supply of power to the compute resources.
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over
G06F 1/30 - Means for acting in the event of power-supply failure or interruption, e.g. power-supply fluctuations
H01M 8/04082 - Arrangements for control of reactant parameters, e.g. pressure or concentration
H01M 10/48 - Accumulators combined with arrangements for measuring, testing or indicating the condition of cells, e.g. the level or density of the electrolyte
The techniques disclosed herein enable systems to enhance the efficiency of computer networking devices through a streamlined routing lookup structure for storing network routes and processing network packets. The routing lookup structure can comprise a lookup table, a set of range tables, and a default range table. The lookup table can process a portion of a network packet address with additional processing at a range table if the portion matches an entry at the lookup table. If there are no matches at the lookup table, the packet is processed by the default range table. The routing lookup structure can further include a secondary lookup table and dynamic optimization for reconfiguring the routing lookup structure to adapt to network conditions. In addition, the routing lookup structure can store new routes based on the length of the prefix. Routes that fall below a minimum length are stored in the default range table.
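A simplified software model of the lookup structure, assuming the leading 16 bits of an IPv4 address index the lookup table, per-entry range tables hold longer prefixes, and short prefixes fall back to the default range table; the field widths and threshold are assumptions.

```python
# Simplified model: a lookup table keyed on the leading portion of the address, per-entry
# range tables for longer prefixes, and a default range table for everything else.
import ipaddress

MIN_PREFIX_LEN = 16           # assumed threshold; shorter routes go to the default table

lookup_table = {}             # leading /16 portion -> range table (list of (network, next_hop))
default_range_table = []

def add_route(prefix: str, next_hop: str):
    net = ipaddress.ip_network(prefix)
    if net.prefixlen < MIN_PREFIX_LEN:
        default_range_table.append((net, next_hop))
        return
    key = int(net.network_address) >> 16                  # portion processed by the lookup table
    lookup_table.setdefault(key, []).append((net, next_hop))

def route(address: str):
    addr = ipaddress.ip_address(address)
    # No match at the lookup table -> the packet is processed by the default range table.
    ranges = lookup_table.get(int(addr) >> 16, default_range_table)
    matches = [(net.prefixlen, hop) for net, hop in ranges if addr in net]
    return max(matches)[1] if matches else None            # longest-prefix match in that table

add_route("10.1.0.0/24", "leaf-a")
add_route("10.0.0.0/8", "spine-default")
print(route("10.1.0.7"), route("10.9.9.9"))   # leaf-a spine-default
```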
A threat investigation agent performs investigative operations for autonomously investigating a potential cyber security threat. The investigative operations include preparing and transmitting inputs to a language model that include at least known threat event information pertaining to the potential cyber security threat, a function list describing functions that execute different types of investigative operation, and instructions directing a language model to return an output identifying a next investigative action that the language model selects as appropriate based on the known threat event information and the function list. The threat investigation agent discovers additional threat event information by executing the next investigative action in response to receiving the output from the language model, updating the known threat event information to include the additional threat event information, and repeating the investigative operations subsequent to updating the known threat event information.
Embodiments of the present disclosure include techniques for synchronized telemetry aggregation and buffering in a system-on-chip (SoC). A first set of telemetry data associated with operation of a plurality of processor cores of the SoC during a first epoch is received. A second set of telemetry data associated with operation of the plurality of processor cores during a second epoch is received. The first set of telemetry data is determined as corresponding to an incomplete set of telemetry data for the first epoch. A message is transmitted to one or more controllers of the plurality of processor cores to modify operations associated with telemetry data collection as a result of the determination.
Requests for artifact data for a containerized application cluster are proxied from a first region of a cloud service to a second region of the cloud service, or another cloud service. A request for the artifact data is received at a first container registry at the first region. The first container registry determines that the artifact data is locally unavailable and in response thereto identifies a peer container registry at the second region that has a copy of the artifact data. A request is sent to the peer container registry for the copy of the artifact data. If the request originated from a local client, then the artifact data is forwarded to the local client via an application programming interface, in some examples.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
Devices, systems, and methods for secure modular addition and subtraction are provided. A modular adder and subtractor circuit with masking circuit includes an arithmetic to Boolean (A2B) conversion operator configured to convert (i) a second sum and (ii) a value determined based on a first sum, to Boolean resulting in first and second Boolean values, a shifter configured to (i) make a most significant bit of the first Boolean value a least significant bit resulting in a shifted first Boolean value and (ii) make the most significant bit of the second Boolean value a least significant bit resulting in a shifted second Boolean value, and a Boolean to arithmetic (B2A) conversion operator, configured to convert a representation of the shifted first Boolean value and a representation of the shifted second Boolean value to arithmetic representation resulting in first and second arithmetic values, respectively.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. The video stream may be encapsulated in a Secure Reliable Transport (SRT) header that includes the unique identifier. When the SRT-wrapped video stream is received on the high-trust side, the GUID in the SRT-header is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses.
Methods, apparatuses, and products for securely deploying cloud-native applications, including: creating, within a tenant's cloud deployment, a secure execution environment for an application, wherein the secure execution environment is managed exclusively by a cloud service provider; deploying, within the secure execution environment, the application, wherein source code for the application is stored in the secure execution environment; and deploying an agent within the secure execution environment, wherein the agent is configured to allow one or more conforming requests to access the application, block one or more tenant-initiated management operations for the secure environment, and allow one or more management operations for the secure environment that are initiated by the cloud service provider.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
A method executed by a host computer system with a processor system involves initiating a guest context that depends on a filesystem image stored in a remote image repository. A reflector disk is generated for the filesystem image, representing data blocks without storing them. Upon receiving a read request at the reflector disk specifying an offset and length within the filesystem image, a set of data blocks is retrieved from the remote repository corresponding to the specified range. The reflector disk then provides the retrieved data blocks to the requester, enabling efficient access to filesystem data without storing the actual data blocks locally.
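A conceptual sketch of serving a reflector-disk read, with the remote image repository faked as an in-memory block map and the block size chosen arbitrarily.

```python
# Conceptual sketch (repository access is faked; names are assumptions): a reflector disk
# represents the image's blocks without storing them, and serves a read for (offset, length)
# by fetching exactly the covering blocks from the remote image repository.
BLOCK_SIZE = 4096

remote_image_repository = {i: bytes([i % 256]) * BLOCK_SIZE for i in range(16)}  # stand-in

def reflector_read(offset: int, length: int) -> bytes:
    first_block = offset // BLOCK_SIZE
    last_block = (offset + length - 1) // BLOCK_SIZE
    # Retrieve only the data blocks corresponding to the requested range.
    blocks = b"".join(remote_image_repository[b] for b in range(first_block, last_block + 1))
    start = offset - first_block * BLOCK_SIZE
    return blocks[start:start + length]

print(len(reflector_read(offset=5000, length=10000)))   # 10000
```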
Implementations for simplified and efficient error correction code schemes with multiple codewords are provided. One aspect includes a computing system comprising: processing circuitry and memory storing instructions that, during execution, cause the processing circuitry to: encode data comprising user data and metadata in a memory module comprising a plurality of dies by: logically partitioning the memory module into a plurality of partitions; on a first partition: storing a first portion of the user data on one or more memory segments of the first partition; and storing a first cyclic redundancy check (CRC) codeword and the metadata on a memory segment of the first partition different from the one or more memory segments of the first partition, wherein the first CRC codeword is generated based on the first portion of the user data and the metadata; and storing parity data on one of the plurality of dies.
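A hedged sketch of the encoding layout, assuming CRC-32 over the user data plus metadata and XOR parity across the data segments; the real codeword, segment sizes, and die layout may differ.

```python
# Illustrative layout: within a partition, user data goes on some memory segments, a CRC
# codeword computed over that user data plus the metadata goes on another segment, and
# XOR parity across the data segments is kept on a separate die.
import zlib
from functools import reduce

def encode_partition(user_data: bytes, metadata: bytes, segment_size: int = 8):
    segments = [user_data[i:i + segment_size] for i in range(0, len(user_data), segment_size)]
    crc_codeword = zlib.crc32(user_data + metadata).to_bytes(4, "little")
    crc_segment = crc_codeword + metadata                  # stored on its own segment
    parity_die = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        [s.ljust(segment_size, b"\0") for s in segments])
    return {"data_segments": segments, "crc_segment": crc_segment, "parity_die": parity_die}

encoded = encode_partition(b"0123456789abcdef", b"META")
print(encoded["crc_segment"].hex(), encoded["parity_die"].hex())
```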
Systems and methods herein provide an interface engine and its related functions. In an example, a method includes identifying, by an interface engine, an event involving a peripheral for a client device. The peripheral is an external device that extends the functionality of the client device. Responsive to identifying the event, the interface engine determines connection capabilities of the client device and determines a connection for the peripheral based on the connection capabilities of the client device and the event. The interface engine may then generate and send a notification of the connection to the client device.
The described technology provides a coolant system for a computing system rack, the coolant system including a hose configured to deliver coolant to and from a computing system rack, one or more sensors configured to measure one or more parameters of the coolant flowing through the hose and to communicate the one or more measured parameters to a hose controller, and an auto-valve configured on the hose to control the flow of coolant through the hose. The hose controller may be configured on the computing system rack and may be configured to control the auto-valve based on one or more of temperature of the coolant, flow level of the coolant, and pressure of the coolant.
A method executed by a computer system with a processor system involves analyzing read profiling data associated with a first filesystem image to determine the sequence in which a guest context accessed multiple files during its startup. Subsequently, a second filesystem image is created based on this profiling data, comprising various data block sets, each set representing files accessed by the guest context. The data block sets are arranged in the second filesystem image according to the order in which the guest context accessed the corresponding files, ensuring a sequential writing process. This innovative approach optimizes filesystem organization and retrieval efficiency based on actual usage patterns during system initialization.
An energy manager stores and manages target energy policies for a processing device. Each target energy policy defines one or more policy-triggering criteria and at least one target energy action in response to satisfaction of the policy-triggering criteria. Each set of policy-triggering criteria depends, at least in part, on a power usage score that the energy manager dynamically determines for the processing device based on energy consumption metrics captured for the processing device.
Techniques for implementing an AI threat modeling tool are disclosed. A static analysis tool is used to extract a candidate code snippet from a code repository. The candidate code snippet is identified as potentially being a security relevant code element. The static analysis tool generates additional context associated with the candidate code snippet. An LLM prompt is generated. This prompt is structured to include the candidate code snippet, the context, and a directive to assign a classification to the candidate code snippet. The classification includes a source classification, a sink classification, a sanitizer classification, or a flow step classification. The LLM operates on the prompt to generate output comprising a specific classification for the candidate code snippet. The output is formatted into a data extension file that is consumable by the static analysis tool.
The disclosure includes a sensitivity detection system that accurately and efficiently determines when information based on a user’s browsing activity unintentionally reveals private or other sensitive information about the user. For example, the sensitivity detection system generates and utilizes machine learning models for detecting sensitivity to accurately detect when sensitive user information is being leaked from a collection of user information, such as a user profile. Additionally, upon determining that sensitive user information is being revealed, in many instances, the sensitivity detection system performs mitigation actions to stop and/or reduce sensitive user information from being undesirably revealed.
Systems and methods herein provide a compute engine and its related functions. In an aspect, a compute engine may determine current resources that are provisioned to provide a service within a cloud-based or hybrid environment. The compute engine may determine a provisioning profile for the current resources, including the hardware and software specifications of the current resources. The compute engine may also determine a current utilization of the current resources by the service based on the provisioning profile. Based on the current utilization, the compute engine may determine a current compute landscape including supply data for a grouping of compute configurations. From the compute configurations, the compute engine may select a compute configuration for the service based on the utilization of the current resources and the current compute landscape.
Methods, systems, and apparatuses include receiving network data for nodes of a graph network of an online system. Logging data is received for entities associated with the nodes. Weakly labeled data is generated by filtering the logging data using the network data. Training data is generated for a relationship scoring machine learning model. The relationship scoring machine learning model is trained to determine relationship scores for the nodes by using the training data. Input data is generated for the trained relationship scoring machine learning model. The trained relationship scoring machine learning model is coupled to an input of a recommendation system to provide a recommendation.
Systems and methods to determine a measured risk of a service outage of a service in a cloud computing system. A system determines service dependencies and evaluates parity drift status information associated with the dependencies using an outage projection model (e.g., a machine learning model, heuristic, and/or a combination of models) trained or otherwise operative to identify a pattern of parity drift status information correlated to a historical pattern associated with a past service outage. The system determines an outage risk score and/or level representing the measured risk of a service outage occurring for the service based on the correlation. The system further provides the outage risk score and/or level (e.g., to a remediation and/or deployment orchestration system). In some examples, an alert is provided when the outage risk score and/or level satisfies a threshold (e.g., is highly indicative of a potential service outage) to proactively facilitate prevention of an outage.
A system implements techniques for efficiently determining that an update deployed by a foundational service has caused a regression based on an aggregate health determination associated with tenant services and/or cloud resource provider services that depend upon the foundational service. The deployment of the update is initiated by an entity (e.g., an engineering team) tasked with operating and/or managing the foundational service. Accordingly, the system described herein can generate and provide a communication, to the foundational service (e.g., entity), indicating that a regression has likely been caused by the update and/or instructing the foundational service to halt the deployment of the update.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
56.
MULTI-NETWORK ROUTING OF FULL MOTION VIDEO STREAMS IN ONE-WAY TRANSFER SYSTEMS
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. When the video stream is received on the high-trust side, the GUID is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses. The destination devices may include further video relays of other networks that similarly use the GUID to route the video streams within their own respective networks.
Double-data rate (DDR) transmission within a system-on-chip (SoC) via network-on-chip (NOC) interconnects is described. A method for transmitting data includes a local programmable delay circuit receiving a source synchronous clock (SSC) signal and outputting a delayed source synchronous clock (SSC) signal. The method further includes a local pulse generator, associated with a first NOC interconnect stage, receiving the delayed SSC signal and generating a first pulse in response to a first phase of the delayed SSC signal and generating a second pulse in response to a second phase of the delayed SSC signal. The method further includes a flop-repeater circuit, associated with the first NOC interconnect stage, capturing and launching data received from the source circuit in response to each of the first pulse and the second pulse. The method further includes a local offset circuit receiving the delayed SSC signal and providing a de-skewed SSC signal.
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches the datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. The enriched video stream is divided into its video stream and enrichment components, which are then filtered separately by a guard for the high-trust side. The filtered components may then be rejoined to reform the enriched video stream. When the enriched video stream is received on the high-trust side, the GUID in the datagram is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses.
The present disclosure describes systems and methods relating to full motion video (FMV) routing in one-way transfer (OWT) systems. The present technology enriches datagrams of the video stream that are sent from the low-trust side of the OWT system with a global unique identifier (GUID) that is used as an identifier to determine a particular destination on the high-trust side of the OWT system. The video stream may be encapsulated in a Secure Reliable Transport (SRT) header that includes the unique identifier. When the SRT-wrapped video stream is received on the high-trust side, the GUID in the SRT-header is extracted and used to identify destination addresses for destination devices in the high-trust computing environment. The video stream is then delivered to the destination devices having the corresponding destination addresses.
An example provides a multi-agent system. At a managing agent, a first lookup of the registry determines a task agent to perform a task. A task invocation is sent to the task agent. The task invocation is to cause the task agent to initiate the task. A skill is received from the task agent. A second lookup of the registry is to determine whether to maintain a thread when invoking a skill agent to perform the skill. Responsive to determining that a thread is not to be maintained to invoke the skill agent to perform the skill, the task agent is identified in a call to the skill agent. The call is to invoke the skill agent to perform the skill asynchronously to the task initiated by the task agent. Via the managing agent, the task agent is to provide a response to the task to a device.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
61.
DECONSTRUCTING GRAPHS USING A DATABASE MANAGEMENT APPROACH
Embodiments of the disclosed technologies are capable of deconstructing a graph using database management approaches. Embodiments describe receiving an event trigger associated with a first node of a graph. The event trigger includes a node identifier of the first node. The embodiments further describe generating a first query for a neighbor identifier associated with a second node of the graph using the node identifier. The neighbor identifier is queried from a first data store. Embodiments further describe generating a second query for a feature corresponding to the neighbor identifier. The feature is queried from a second data store. Embodiments further describe generating an embedding of the event trigger using the node identifier, the neighbor identifier and the feature.
A computing system includes a processor; and memory storing instructions that, when executed by the processor, cause the processor to perform several acts. The acts include receiving multiple messages from multiple users in a messaging application that supports group conversations, where the multiple messages are included in a group conversation. The acts also include providing a prompt to a generative model, where the prompt includes the multiple messages. The acts additionally include receiving, from the generative model, an output generated by the generative model based upon the prompt and including the output as a turn in the group conversation.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
63.
CHANGING FOV AND RESOLUTION FOR CALIBRATING SCANNING DISPLAY SYSTEM ALIGNMENT
One example provides a display device comprising a scanning display system comprising a left-eye projector and a right-eye projector. The display device further comprises a controller configured to control the scanning display system to, in a display mode, output stereoscopic display images using the left-eye projector and the right-eye projector. The stereoscopic display images comprise a first field of view (FOV) and a first resolution. The controller is further configured to control the scanning display system to, in an alignment mode, output a left-eye alignment image and a right-eye alignment image respectively using the left-eye projector and the right-eye projector. One or more of the left-eye alignment image or the right-eye alignment image comprises a second FOV that is smaller than the first FOV, and a second resolution that is higher than the first resolution.
Systems and methods are provided for implementing quality assurance for digital technologies using language model (“LM”)-based artificial intelligence (“AI”) and/or machine learning (“ML”) systems. In various embodiments, a first prompt is provided to an LM actor or attacker to cause the LM actor or attacker to generate interaction content for interacting with test software. Responses from the test software are then evaluated by an LM evaluator to produce evaluation results. In some examples, a second prompt is generated that includes the responses from the test software along with the evaluation criteria for the test software. When the second prompt is provided to the LM evaluator, the LM evaluator generates the evaluation results.
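A minimal sketch of the actor/evaluator loop described above, assuming toy stand-ins for the LM actor, the LM evaluator, and the software under test; the prompt wording and the login-form example are illustrative only.

    def lm_actor(prompt: str) -> str:
        # Stand-in for the LM actor/attacker that generates interaction content.
        return "'; DROP TABLE users; --"

    def lm_evaluator(prompt: str) -> str:
        # Stand-in for the LM evaluator that produces evaluation results.
        return "PASS: input was rejected and no state was modified"

    def run_quality_check(test_software, evaluation_criteria: str) -> str:
        # First prompt: ask the actor/attacker for interaction content to send to the test software.
        interaction = lm_actor("Generate one adversarial input for a login form.")
        response = test_software(interaction)
        # Second prompt: bundle the software's response with the evaluation criteria.
        second_prompt = f"Criteria: {evaluation_criteria}\nResponse under test: {response}\nEvaluate."
        return lm_evaluator(second_prompt)

    def toy_login_form(user_input: str) -> str:
        return "rejected" if "'" in user_input else "accepted"

    print(run_quality_check(toy_login_form, "Reject malformed or injection-style input."))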
A device comprising a base and a region for receiving an optical fibre holder that is immobilised with respect to the base. The optical fibre holder comprises a clip operable between an open position in which an optical fibre is moveable with respect to the optical fibre holder and a closed position in which the optical fibre is immobilised with respect to the optical fibre holder for longitudinal, rotational, and transverse movement. The device also comprises a clamp rotatably mounted on the base to enable rotation about an axis coincident with a longitudinal axis of a held optical fibre. The clamp is for clamping a held optical fibre and is operable between an unclamped position, and a clamped position in which the optical fibre is immobilised with respect to the base for longitudinal and transverse movement of the optical fibre but rotatable about the longitudinal axis of the optical fibre.
Artificial intelligence (AI) techniques for connection networking are described. A method comprises: receiving a first vector by an embedding layer of a decision transformer, the first vector comprising entity trajectory features associated with an entity identifier of a connection network system; generating a first entity trajectory embedding from the set of entity trajectory features by the embedding layer, the first entity trajectory embedding comprising a sequence of values representing a first state, a first action, and a first reward associated with a first timestep; generating a predicted action embedding based on the first entity trajectory embedding by the decision transformer, the predicted action embedding comprising values representing a predicted action to achieve a total reward given the first state, the first action, and the first reward; selecting a target content item based on the predicted action embedding; and causing presentation of the target content item on a user interface.
Various embodiments discussed herein relate to generating a customizable route along a road network. Some embodiments generate a representation of a road network that includes locations and connections between those locations. Some embodiments generate a frequency-based lookup table that includes probabilities that a particular route can be selected from possible routes from one location to another. In response to receiving a request to generate a route, some embodiments select a location in the road network and, using the frequency-based lookup table, generate a route to another location in the road network. The route to the other location is then presented using a map interface.
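A short Python sketch of route generation from a frequency-based lookup table as described above; the road-network dictionary and the weighted random walk are assumed stand-ins for the actual representation and selection logic.

    import random

    # Frequency-based lookup table: for each location, the probability of selecting
    # each connected next location. Probabilities per location sum to 1.
    ROAD_NETWORK = {
        "A": {"B": 0.7, "C": 0.3},
        "B": {"D": 1.0},
        "C": {"D": 1.0},
        "D": {},
    }

    def generate_route(start: str, destination: str, rng: random.Random) -> list[str]:
        route = [start]
        current = start
        while current != destination and ROAD_NETWORK[current]:
            next_locations = list(ROAD_NETWORK[current])
            weights = [ROAD_NETWORK[current][n] for n in next_locations]
            current = rng.choices(next_locations, weights=weights, k=1)[0]
            route.append(current)
        return route

    # Example: a customizable route from A to D, e.g. ['A', 'B', 'D'] or ['A', 'C', 'D'].
    print(generate_route("A", "D", random.Random(7)))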
An optical repeater for communicating optical signals between a transmission device and a reception device through an optical transmission medium includes an optical receiver and an optical transmitter each positioned on a transmission path of the optical signals. The optical receiver is configured to detect an input spatiotemporal pattern transmitted from the transmission device, and the optical transmitter is configured to transmit an output spatiotemporal pattern to the reception device, the output spatiotemporal pattern being based at least in part on the input spatiotemporal pattern.
This document relates to computer networking. For instance, the disclosed techniques provide for prioritizing packets based on Quality of Service (“QoS”) levels within a networking device, such as a router. A router or other networking device consistent with the disclosed implementations can perform priority masking of received packets by comparing QoS priority values of the received packets to a designated QoS priority level. Packets having QoS priority values exceeding the designated QoS priority level can proceed to arbitration, whereas other packets can be masked to prevent them from proceeding to arbitration.
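A minimal sketch of the priority-masking step described above: packets whose QoS priority exceeds a designated level proceed to arbitration, the rest are masked. The packet fields and threshold value are illustrative assumptions.

    def mask_for_arbitration(packets: list[dict], designated_priority: int) -> list[dict]:
        # Packets with QoS priority values exceeding the designated level proceed to
        # arbitration; the remaining packets are masked for this arbitration round.
        return [p for p in packets if p["qos_priority"] > designated_priority]

    incoming = [
        {"id": 1, "qos_priority": 5},
        {"id": 2, "qos_priority": 2},
        {"id": 3, "qos_priority": 7},
    ]
    print(mask_for_arbitration(incoming, designated_priority=4))  # packets 1 and 3 proceed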
The present disclosure generally relates to optimizing load-balancing of network endpoints using tree collectives representing a logical network communication topology for the network endpoints. Systems and methods described herein eliminate the previously restrictive conditions imposed on tree-based communication collectives by generating collective trees with any arity and representing any number of physical network endpoints. The resulting collective trees ensure that each represented network endpoint has a number of outgoing flows and a number of incoming flows that are no more than the arity of the collective tree. In this way, the described systems and methods inject significant efficiencies into communication collectives within networked compute nodes by eliminating communication bandwidth latencies and bottlenecks.
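A small sketch of building a collective tree of arbitrary arity over any number of endpoints, so that no endpoint has more outgoing or incoming flows than the arity, as the entry above describes. The flat index-based parent assignment is an assumption for illustration.

    def build_collective_tree(endpoints: list[str], arity: int) -> dict[str, list[str]]:
        # Arrange any number of endpoints into a tree of the requested arity so that
        # each endpoint has at most `arity` children (outgoing flows) and one parent
        # (incoming flow).
        children: dict[str, list[str]] = {e: [] for e in endpoints}
        for i, endpoint in enumerate(endpoints[1:], start=1):
            parent = endpoints[(i - 1) // arity]
            children[parent].append(endpoint)
        return children

    tree = build_collective_tree([f"node{i}" for i in range(7)], arity=3)
    for parent, kids in tree.items():
        print(parent, "->", kids)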
A computer-implemented method, computer program product and computing system for: defining a cloud compute instance within a cloud computing platform; and enabling access from the cloud compute instance to a local management endpoint opened within a local management service, to enable management of at least one of the cloud compute instance and a cloud resource via the local management endpoint while bypassing at least one of a cloud frontend and a management layer of the cloud computing platform.
In some examples, a seed query is received. A set of candidate queries is retrieved from a database based on matching the seed query with each candidate query. Based on the seed query, a ranking model is used to assign a relevance score to each candidate query. The relevance score indicates suitability of the candidate query as a follow-up query to the seed query (“next query” suitability). At least one candidate query is outputted based on the relevance scores. For example, candidate queries may be ordered or filtered based on their relevancy scores. In some implementations, an action is instigated automatically or semi-automatically based on a model-generated response to a candidate query. In some implementations, generative model calls are instigated on the seed query and a next query selected from the candidate next queries.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
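A minimal Python sketch of the seed-query flow in the entry above: candidates are retrieved by matching, scored by a ranking model, and ordered. The term-overlap scorer is a stand-in assumption for the actual ranking model.

    def retrieve_candidates(seed_query: str, database: list[str]) -> list[str]:
        # Simple match: candidates sharing at least one term with the seed query.
        seed_terms = set(seed_query.lower().split())
        return [q for q in database if seed_terms & set(q.lower().split())]

    def relevance_score(seed_query: str, candidate: str) -> float:
        # Stand-in ranking model: term overlap as a proxy for "next query" suitability.
        seed_terms = set(seed_query.lower().split())
        cand_terms = set(candidate.lower().split())
        return len(seed_terms & cand_terms) / len(seed_terms | cand_terms)

    database = [
        "quarterly revenue by region",
        "revenue forecast next quarter",
        "employee headcount by office",
    ]
    seed = "revenue by region"
    candidates = retrieve_candidates(seed, database)
    ranked = sorted(candidates, key=lambda q: relevance_score(seed, q), reverse=True)
    print(ranked[:2])  # candidate queries offered as follow-up ("next") queries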
Described herein are techniques for evaluating large language model outputs through autonomous generation of context-aware evaluation criteria without relying on static human-defined standards. The approach enables dynamic generation of evaluation criteria tailored to specific instructions and responses, while incorporating context-specific knowledge crucial for accurate assessment. A framework implements both absolute evaluation against reference answers and relative comparison between multiple responses. Knowledge distillation techniques create efficient smaller models capable of criteria generation and evaluation with performance comparable to larger models. The technique demonstrates significant improvements in evaluation accuracy across diverse tasks while reducing computational costs through optimized model architectures. Additionally, the approach enhances preference-based learning through dynamically generated evaluation criteria, improving model alignment with human judgment.
An example provides a multi-agent system. At a managing agent, a first lookup of the registry determines a task agent to perform a task. A task invocation is sent to the task agent. The task invocation is to cause the task agent to initiate the task. A skill is received from the task agent. A second lookup of the registry is to determine whether to maintain a thread when invoking a skill agent to perform the skill. Responsive to determining that a thread is not to be maintained to invoke the skill agent to perform the skill, the task agent is identified in a call to the skill agent. The call is to invoke the skill agent to perform the skill asynchronously to the task initiated by the task agent. Via the managing agent, the task agent is to provide a response to the task to a device.
An example sends query input and a first instruction to a generative machine learning model (GMLM). The first instruction is to cause the GMLM to generate and output a query input summary. The query input summary and a second instruction are sent to the GMLM. The second instruction is to cause the GMLM to provide query results using multiple different queries and the query input summary. The query results and a third instruction are sent to the GMLM. The third instruction is to cause the GMLM to summarize a comparison of a query result with the query input summary. A query result evaluation summary is generated and output by the GMLM. A signal is received from a position within a presentation of the query result evaluation summary. The first instruction is updated to include the signal in the query input summary.
This document relates to techniques for rendering realistic sounds in a virtual scene, such as in a video game or simulation. The disclosed techniques can account for the state of various portals, such as windows or doors, in the virtual scene. The states can range from fully open to fully closed. For instance, the disclosed techniques can identify various portals that are on paths between a sound source and a listener in the virtual scene and then attenuate sound energy arriving at the listener based on the state of those portals.
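A toy Python sketch of attenuating sound energy by portal state along a source-to-listener path, as described above; the leak constant and linear openness model are assumptions, not the disclosed acoustic model.

    def attenuate_through_portals(source_energy: float, portal_states: list[float]) -> float:
        # Each portal on the path between source and listener has a state in [0, 1],
        # where 1.0 is fully open and 0.0 is fully closed. In this toy model a closed
        # portal still transmits a small residual fraction of the energy.
        energy = source_energy
        for openness in portal_states:
            leak = 0.05
            energy *= leak + (1.0 - leak) * openness
        return energy

    # Sound passing through a fully open door and a half-open window.
    print(attenuate_through_portals(1.0, [1.0, 0.5]))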
An original email text is evaluated against a set of properties to identify shortcomings in the natural language, format, and structure used in the original email text. The properties represent characteristics needed to enhance the original email text to a professionally-written style. A first machine learning model identifies the properties or characteristics missing from the original email, given a few-shot context. The few-shot context includes labeled email samples containing the desired properties. A second machine learning model is used to generate an enhanced email given the original email, personal data relevant to the content of the original email, the missing properties, and the few-shot context.
This document relates to generation of new images from input images depicting different objects. For instance, the disclosed techniques can generate a layout that specifies locations of the objects. Then, a canvas can be generated depicting the objects in the specified locations. A generative image model can be employed to inpaint areas of the canvas around the depicted objects, while the objects themselves retain their original appearance from the input images.
In some examples, a seed query is received. A set of candidate queries is retrieved from a database based on matching the seed query with each candidate query. Based on the seed query, a ranking model is used to assign a relevance score to each candidate query. The relevance score indicates suitability of the candidate query as a follow-up query to the seed query ("next query" suitability). At least one candidate query is outputted based on the relevance scores. For example, candidate queries may be ordered or filtered based on their relevancy scores. In some implementations, an action is instigated automatically or semi-automatically based on a model-generated response to a candidate query. In some implementations, generative model calls are instigated on the seed query and a next query selected from the candidate next queries.
An optical repeater for communicating optical signals between a transmission device and a reception device through an optical transmission medium includes an optical receiver and an optical transmitter each positioned on a transmission path of the optical signals. The optical receiver is configured to detect an input spatiotemporal pattern transmitted from the transmission device, and the optical transmitter is configured to transmit an output spatiotemporal pattern to the reception device, the output spatiotemporal pattern being based at least in part on the input spatiotemporal pattern.
A device may provide a prompt to a first machine learning model. The prompt may include at least one instruction to cause the first machine learning model to use at least first natural language input associated with a use of a conversational search system to rank data sources, generate a first search query and reasoning, and use the first search query and the reasoning to generate a second search query. The first search query may include data obtained from at least one of the ranked data sources. The reasoning may include an explanation of how the first machine learning model generated the first search query. A second machine learning model may synthesize a response determined via execution of the second search query. The synthesized response may be provided for presentation via the conversational search system.
The presently disclosed compliant magnetic locking mechanisms provide mechanism(s) and method(s) for latching a cover (e.g., a rear cover) to a device body that is quick assembling, low cost, easily recyclable, secure, and/or leaves no exposed fasteners. A bi-stable (or mono-stable) compliant mechanism inside the cover actuates a set of latches in and out. This action serves to lock and unlock the cover or other removable portion of the computing device. The compliant magnetic locking mechanisms may contain an array of magnets, which can be used in conjunction with a magnetic key to actuate the compliant mechanism from the exterior.
This disclosure describes utilizing an image model protection system to improve the defensive robustness of a large generative image model against the generation of harmful digital images. For example, the image model protection system uses digital signatures of identified harmful images to determine whether a particular harmful image was generated by a specific large generative image model. Using digital signatures, the image model protection system matches the harmful image to images generated by the large generative image model. The image model protection system then identifies the prompt used to generate the image at the large generative image model. Furthermore, the image model protection system uses the harmful prompt to implement new security measures to safeguard the large generative image model against the generation of similar harmful images in the future.
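A rough sketch, under stated assumptions, of matching a harmful image back to a model-generated image and recovering the prompt that produced it, as the entry above describes. Using a SHA-256 content hash as the "digital signature" is purely illustrative; a deployed system would more likely use a perceptual or watermark-based signature.

    import hashlib

    def signature(image_bytes: bytes) -> str:
        # Stand-in digital signature: exact content hash (illustrative assumption).
        return hashlib.sha256(image_bytes).hexdigest()

    # Log of images produced by the protected generative model, keyed by signature.
    GENERATION_LOG = {}

    def record_generation(image_bytes: bytes, prompt: str) -> None:
        GENERATION_LOG[signature(image_bytes)] = prompt

    def trace_harmful_image(image_bytes: bytes):
        # If the harmful image matches one the model generated, return the prompt that
        # produced it, so new safeguards (e.g., prompt filters) can be derived from it.
        return GENERATION_LOG.get(signature(image_bytes))

    record_generation(b"\x89PNG...fake-bytes", "a photorealistic forged document")
    print(trace_harmful_image(b"\x89PNG...fake-bytes"))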
Methods and apparatuses for improving the performance and energy efficiency of machine learning systems that generate security specific machine learning models and generate security related information using security specific machine learning models are described. A security specific machine learning model may comprise a security specific large language model (LLM). The security specific LLM may be trained and deployed to generate semantically related security information. The security specific LLM may be pretrained with a security specific data set that was generated using similarity deduplication and long line handling, and with security specific objectives, such as next log line prediction based on host, system, application, and cyber attacker behavior. The security specific large language model may be fine-tuned using a security specific similarity dataset that may be generated to align the security specific LLM to capture similarity between different security events.
Example implementations relate to methods, apparatuses, and computer-readable media for providing reusable user experience components for interacting with a generative artificial intelligence (AI). An AI service hosted in a network receives a first prompt from an application or a user thereof to a generative AI. An orchestration layer of the generative AI, configured to generate an execution plan and chain-of-thought for the prompt, identifies a first skill from a skill library that best answers the prompt. The AI service obtains a schema for the first skill and returns the schema to a control loader of the application, which invokes pre-written code for the first skill with inputs specified in the schema to generate a user interface component. The AI service provides a context of the user interface component to the generative AI.
A technique compresses a pretrained model having a sequence of model parts that produce output results of the same shape, to produce a compressed model. The technique includes converting the pretrained model into a difference-based model having instances of difference-based weights, converting the difference-based model into a reduced-dimension model having instances of reduced-dimension weights, and then fine-tuning the reduced-dimension model. Each instance of difference-based weights expresses the difference between neighboring instances of full weights in the pretrained model. The execution of the compressed model includes generating an instance of full weights associated with a particular fine-tuned model part of the compressed model. This is performed by combining instances of weights associated with different levels of the compressed model. The technique significantly reduces the amount of resources that are required to store and run a machine-trained model.
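A compact NumPy sketch of the difference-then-reduce idea described above: keep the first part's full weights, store each later part as a low-rank factorization of its difference from the previous part, and recover full weights by accumulating the stored differences. The SVD rank and matrix shapes are assumptions for illustration.

    import numpy as np

    def low_rank(diff: np.ndarray, rank: int) -> tuple[np.ndarray, np.ndarray]:
        # Reduced-dimension representation of one difference matrix via truncated SVD.
        u, s, vt = np.linalg.svd(diff, full_matrices=False)
        return u[:, :rank] * s[:rank], vt[:rank, :]

    def compress(parts: list[np.ndarray], rank: int):
        # Keep the first part's full weights; store each later part only as a
        # low-rank factorization of its difference from the neighboring part.
        base = parts[0]
        diffs = [low_rank(parts[i] - parts[i - 1], rank) for i in range(1, len(parts))]
        return base, diffs

    def reconstruct(base: np.ndarray, diffs, index: int) -> np.ndarray:
        # Recover the full weights of part `index` by combining the base weights with
        # the accumulated reduced-dimension differences from earlier levels.
        weights = base.copy()
        for a, b in diffs[:index]:
            weights += a @ b
        return weights

    rng = np.random.default_rng(0)
    parts = [rng.standard_normal((8, 8)) for _ in range(3)]
    base, diffs = compress(parts, rank=4)
    print(np.linalg.norm(reconstruct(base, diffs, 2) - parts[2]))  # reconstruction error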
Double-data rate (DDR) transmission within a system-on-chip (SoC) via network-on-chip (NOC) interconnects is described. A method for transmitting data includes a local programmable delay circuit receiving a source synchronous clock (SSC) signal and outputting a delayed source synchronous clock (SSC) signal. The method further includes a local pulse generator, associated with a first NOC interconnect stage, receiving the delayed SSC signal and generating a first pulse in response to a first phase of the delayed SSC signal and generating a second pulse in response to a second phase of the delayed SSC signal. The method further includes a flop-repeater circuit, associated with the first NOC interconnect stage, capturing and launching data received from the source circuit in response to each of the first pulse and the second pulse. The method further includes a local offset circuit receiving the delayed SSC signal and providing a de-skewed SSC signal.
H03K 5/14 - Arrangements having a single output and transforming input signals into pulses delivered at desired time intervals by the use of delay lines
H03K 5/06 - Shaping pulses by increasing durationShaping pulses by decreasing duration by the use of delay lines or other analogue delay elements
88.
SYNERGISTIC COLLABORATION BETWEEN WEAK AND STRONG LANGUAGE MODELS
Described herein are techniques for improving language model performance through collaborative interaction between specialized and general-purpose models. A specialized model with fewer than ten billion parameters undergoes supervised fine-tuning on domain-specific data and generates initial outputs. These outputs are refined by a general-purpose model having over one hundred billion parameters and advanced reasoning capabilities. The framework implements preference tuning where outputs from both models are evaluated to generate preference triplets that optimize the performance of the specialized model. This approach achieves significant accuracy improvements while maintaining data privacy and computational efficiency.
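A minimal sketch of assembling preference triplets from the specialized (weak) and general-purpose (strong) model outputs described above. The lambda stand-ins and length-based judge are assumptions; the actual preference signal would come from an evaluator, not string length.

    def build_preference_triplets(prompts, specialized_model, general_model, judge):
        # For each prompt, the specialized model drafts an output and the general-purpose
        # model refines it; the judge decides which output is preferred. The resulting
        # (prompt, preferred, rejected) triplets drive preference tuning of the
        # specialized model.
        triplets = []
        for prompt in prompts:
            draft = specialized_model(prompt)
            refined = general_model(prompt, draft)
            if judge(prompt, refined, draft):
                triplets.append((prompt, refined, draft))
            else:
                triplets.append((prompt, draft, refined))
        return triplets

    # Toy stand-ins for the two models and the preference judge.
    small = lambda p: f"short answer to: {p}"
    large = lambda p, d: d + " (with step-by-step reasoning)"
    prefer_longer = lambda p, a, b: len(a) >= len(b)

    print(build_preference_triplets(["What is 2+2?"], small, large, prefer_longer))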
An example includes a processor, a first memory, and a second memory. The first memory includes an action store. The action store includes a hierarchical arrangement of memory layers and actions stored in one or more of the memory layers. The action store stores first mappings among actions and second mappings among actions and memory layers. The first mappings among actions are hierarchical and an action is executable by an agent. The second memory includes an instruction to cause the processor to create or update the first mappings, the second mappings, or the first mappings and the second mappings, in response to a signal received from the agent via a device. The signal indicates input including a task, feedback relating to an execution of one of the actions by the agent, and/or performance data associated with the action.
An example provides a multi-agent system. Via an orchestrator agent, a query input is received. Via the orchestrator agent, the query input and a first instruction are provided to a generative machine learning model (GMLM). The first instruction is to cause the GMLM to determine queries using the query input. Via a query execution agent, the queries execute in parallel. Via the orchestrator agent, it is determined whether the queries are executing. Via a query evaluation agent, query results of execution of the queries and a second instruction are provided to the GMLM. The second instruction is to cause the GMLM to generate, for each query result, a query result summary. Via the orchestrator agent, the query result summaries are used to determine a subset of the query results for presentation via a device.
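A brief asyncio sketch of the orchestrator flow in the entry above: queries derived from the query input run in parallel, each result is summarized, and a subset is kept for presentation. The query decomposition and summaries are stand-ins for the GMLM calls named in the abstract.

    import asyncio

    async def execute_query(query: str) -> str:
        # Stand-in for one query executed by the query execution agent.
        await asyncio.sleep(0.1)
        return f"rows for: {query}"

    async def orchestrate(query_input: str) -> list[str]:
        # Stand-in for the first GMLM call: determine multiple queries from the input.
        queries = [f"{query_input} - part {i}" for i in range(1, 4)]
        # Query execution agent: run the queries in parallel.
        results = await asyncio.gather(*(execute_query(q) for q in queries))
        # Stand-in for the query evaluation agent: summarize each query result.
        summaries = [f"summary of '{r}'" for r in results]
        # Orchestrator keeps a subset of the results for presentation via a device.
        return summaries[:2]

    print(asyncio.run(orchestrate("sales trends")))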
The description relates to hinged devices, such as hinged computing devices that include a pop-up function. One example can include a first portion and a second portion that are rotatably secured through a range of rotation from an open orientation to a closed orientation. This example can also include a selective isolation assembly configured to convert rotational torque associated with rotating the first and second portions toward the closed orientation to a compressive force that compresses a spring. The selective isolation assembly is configured to disconnect the first and second portions and the compressed spring as the first and second portions approach the closed orientation.
A semiconductor package assembly including: a package layer; an interposer layer electrically coupled to the package layer; and a compute die including one or more processing elements, wherein the compute die is electrically coupled to the interposer layer via one or more electrical connections positioned proximate the geometric center of the one or more processing elements.
H01L 23/50 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements for integrated circuit devices
H01L 23/488 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements consisting of soldered or bonded constructions
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software using artificial intelligence for sales management, customer service management, business operations management, financial and finance operations management, remote and field service operations management, project service automation management, marketing campaign planning and management, customer relationship management, enterprise resource management, sales and marketing information management, personnel and project management, facilities management, Internet of Things management, software development and customization, the analysis, warehousing, processing and management of data, Big Data management, security and authentication, social media management, peer-to-peer communications, email account management, word processing; downloadable cloud computing software for sales management, customer service management, business operations management, financial and finance operations management, remote and field service operations management, project service automation management, marketing campaign planning and management, customer relationship management, enterprise resource management, sales and marketing information management, personnel and project management, facilities management, Internet of Things management, software development and customization, the analysis, warehousing, processing and management of data, Big Data management, security and authentication, social media management, peer-to-peer communications, email account management, word processing; downloadable computer software using artificial intelligence for business management, accounting, and marketing in the fields of business; downloadable computer software, namely, computer ecommerce applications software for allowing users to perform electronic business transactions via a global computer network; downloadable computer software for supply side management, customer relationship management, financial management and accounting
Software as a service (SaaS) services featuring software using artificial intelligence for sales management, customer service management, business operations management, financial and finance operations management, remote and field service operations management, project service automation management, marketing campaign planning and management, customer relationship management, enterprise resource management, sales and marketing information management, personnel and project management, facilities management, Internet of Things management, software development and customization, the analysis, warehousing, processing and management of data, Big Data management, security and authentication, social media management, peer-to-peer communications, email account management, word processing, and cloud computing for use with all of the foregoing; online information technology [IT] consultancy services in the field of computers, computer software and computer systems, namely, 24/7 service desk or help desk services for IT infrastructure, operating systems, database systems, and web applications; consultancy in the design and development of computer hardware; computer software consultancy; consulting in the field of cloud computing networks and applications; computer diagnostic services; updating of computer software for others; technical support, namely, troubleshooting of diagnostic computer hardware and software problems; providing online updating of computer software for others via the Internet
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software for creating, building, designing, and developing websites; downloadable computer software for launching, posting, updating, and maintaining websites; downloadable computer software, namely, software development tools for the creation of websites, computer software applications and mobile software applications; downloadable computer software using artificial intelligence for creating, building, designing, and developing websites
Software as a Service (SaaS) services for creating, building, designing, and developing websites; Software as a Service (SaaS) services for launching, posting, updating, and maintaining websites; Software as a Service (SaaS) services, namely, software development tools for the creation of websites, computer software applications and mobile software applications; providing online, non-downloadable software featuring artificial intelligence for creating, building, designing, and developing websites
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software development tools for the creation of computer software applications and mobile software applications; downloadable computer software for publishing, sharing and accessing computer software applications and mobile software applications; downloadable computer software development tools using artificial intelligence for the creation of computer software applications and mobile software applications; downloadable computer software using artificial intelligence for publishing, sharing and accessing computer software applications and mobile software applications
Software as a Service (SaaS) featuring computer software development tools for the creation of computer software applications and mobile software applications; Software as a Service (SaaS) featuring software for publishing, sharing, and accessing computer software applications and mobile software applications; providing online, non-downloadable software featuring artificial intelligence for software development tools
96.
SYSTEM AND METHOD FOR PROCESSING QUERIES AGAINST SEMANTIC CACHE ENTRIES USING UNIQUE DISTANCE-BASED THRESHOLDS
A method, computer program product, and computing system for processing a dataset of query-answer pairs, including: generating synthetic variations of queries from a dataset of query-answer pairs; generating an embedding dataset by transforming the synthetic variations of queries into synthetic query embeddings and queries in the dataset of query-answer pairs into query embeddings; storing at least a portion of the synthetic query embeddings and query embeddings in a semantic cache, wherein each synthetic query embedding and query embedding stored in the semantic cache is associated with a respective distance threshold determined based at least in part on a measure of semantic similarity between the synthetic query and the query used to generate a particular query embedding; and processing a subsequent query using the synthetic variations of queries from the semantic cache and the distance thresholds.
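A small Python sketch of a semantic cache whose entries each carry their own distance threshold, as in entry 96 above. The character-frequency embedding and the hard-coded thresholds are illustrative assumptions; the disclosure derives each threshold from the semantic similarity between a query and its synthetic variations.

    import math

    def embed(text: str) -> list[float]:
        # Toy embedding: normalized character-frequency vector (a real system would use
        # a sentence encoder).
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def distance(a: list[float], b: list[float]) -> float:
        return 1.0 - sum(x * y for x, y in zip(a, b))  # cosine distance

    # Semantic cache: each entry keeps its embedding, its answer, and its own
    # distance threshold.
    CACHE = [
        {"embedding": embed("reset my password"), "answer": "Use the account settings page.", "threshold": 0.15},
        {"embedding": embed("cancel my subscription"), "answer": "Open billing and choose cancel.", "threshold": 0.10},
    ]

    def lookup(query: str):
        q = embed(query)
        for entry in CACHE:
            if distance(q, entry["embedding"]) <= entry["threshold"]:
                return entry["answer"]
        return None  # cache miss: fall through to the full model

    print(lookup("how do I reset my password"))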
An original email text is evaluated against a set of properties to identify shortcomings in the natural language, format, and structure used in the original email text. The properties represent characteristics needed to enhance the original email text to a professionally-written style. A first machine learning model identifies the properties or characteristics missing from the original email, given a few-shot context. The few-shot context includes labeled email samples containing the desired properties. A second machine learning model is used to generate an enhanced email given the original email, personal data relevant to the content of the original email, the missing properties, and the few-shot context.
A method executed within a virtual machine (VM) host computer system. The method includes, during a pre-execution phase of a VM, mapping a physical address (PA)-backed range of the VM’s guest-physical memory pages to host-physical addresses (HPAs) within the system memory of the VM host and mapping a virtual-address (VA)-backed range of the VM’s guest-physical memory pages to host-virtual addresses (HVAs). The method also includes, during an execution phase of the VM, performing intercept processing when the VM accesses a guest-physical address (GPA) within the VA-backed range to map a corresponding HVA to an HPA. This method enables memory oversubscription of the VM by the VM host via the VA-backed memory range. At the same time, the method provides the VM with access to a PA-backed memory range that can be utilized without the potential performance penalties associated with using the VA-backed range.
This document relates to generation of new images from input images depicting different objects. For instance, the disclosed techniques can generate a layout that specifies locations of the objects. Then, a canvas can be generated depicting the objects in the specified locations. A generative image model can be employed to inpaint areas of the canvas around the depicted objects, while the objects themselves retain their original appearance from the input images.
The description relates to hinged devices, such as hinged computing devices that include a pop-up function. One example can include a first portion and a second portion that are rotatably secured through a range of rotation from an open orientation to a closed orientation. This example can also include a selective isolation assembly configured to convert rotational torque associated with rotating the first and second portions toward the closed orientation to a compressive force that compresses a spring. The selective isolation assembly is configured to disconnect the first and second portions and the compressed spring as the first and second portions approach the closed orientation.