Techniques for implementing and utilizing capacity cluster resource reservations are described. A managed compute service receives a user's request to identify a capacity block for use in launching compute instances. A schedule with pre-computed blocks, each corresponding to an amount of instances and an amount of time, is used to identify a block satisfying the request's criteria. The user can later obtain the capacity block and launch instances into the reservation during its time window, where placement rules associated with the reservation ensure the instances are hosted in locations enabling low-latency intercommunications.
Devices and techniques are generally described for audio-based entity resolution. In various examples, first audio data representing speech comprising a mention of a first entity may be received. In some examples, first embedding data representing the first audio data may be received. Second embedding data representing the first entity may be determined. A first modified embedding may be generated using a first attention mechanism to compare the first embedding data to the second embedding data. In some examples, a determination may be made that the first audio data includes a mention of the first entity.
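The comparison of an audio-derived embedding against an entity embedding via an attention mechanism can be sketched as single-head dot-product attention. This is a minimal illustration, not the patented model; the vectors and the function name `attention_modified_embedding` are assumptions for the example.

```python
import math

def attention_modified_embedding(query, keys, values):
    """Produce a modified embedding by attending from the audio-derived query
    embedding over candidate entity embeddings (here keys double as values).
    Scores are dot products, normalized with a numerically stable softmax."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    # Weighted sum of the value vectors: the "modified embedding"
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

A downstream classifier could then threshold the similarity between the modified embedding and the entity embedding to decide whether the audio mentions the entity.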
Described herein is a multi-directional floating mechanism that can be attached to a robot. The mechanism compensates for misalignment when the robot is installing a component during an assembly process. The mechanism is provided between the robotic arm and an attachment of the robot that interacts with the components. The mechanism includes two plates that are fixed together but can translate and rotate by a certain amount with respect to one another to compensate for the misalignment. When a task is complete and a force is no longer being exerted on the mechanism, the top plate and bottom plate return to alignment (for example, based on a magnetic force between magnets embedded in both plates).
H01R 43/00 - Apparatus or processes specially adapted for manufacturing, assembling, maintaining, or repairing of line connectors or current collectors or for joining electric conductors
Techniques for a knowledge-graph system to use large language models (LLMs) to build knowledge graphs to answer queries submitted to a chatbot by users. The knowledge-graph system builds the knowledge graph using answers produced by an LLM for novel queries. The chatbot will continue to use the LLM to answer novel queries, but the chatbot may harness the knowledge graph to answer repeat questions to gain various efficiencies over LLM-backed chatbots. For example, the knowledge-graph system may easily debug or otherwise improve the answers in knowledge graphs, store provenance information in knowledge graphs, and augment the knowledge graphs using other data sources. Thus, the reliability and correctness of chatbots will be improved as the bugs and inaccuracies in answers provided by the LLM will be corrected in the knowledge graphs, but the chatbots can still harness the abilities of LLMs to provide answers across various subject-matter domains.
A system (100; 200; 500-700) and supporting method (300, 400) enable receipt (302) of a computer-coded policy for execution in a control plane associated with a cloud environment to provide data governance in a data plane using one or more data assets of the cloud environment, where the one or more data assets are automatically associated (310) to the computer-coded policy using a set of pre-determined rules associated with the computer-coded policy and using annotations associated with the one or more data assets, and where dynamic changes (404) are to be performed with respect to the annotations based in part on real-time changes to the computer-coded policy to allow monitoring (406) contents of the one or more data assets in accordance with the computer-coded policy and to perform (412) a remediation action that is associated with the one or more data assets in response to a violation associated with the computer-coded policy.
Systems and methods are provided for managing computing services for an application comprising a plurality of virtual computing components executing on one or more host computing devices, wherein an application virtual computing component is to perform application functionality, and wherein a system computing component is to perform system functionality including management of the application virtual computing component. A service virtual computing component is determined to execute using a first access credential to provide a first computing service to the application virtual computing component, and to execute using a second access credential to provide a second computing service to the system computing component, wherein the first access credential is assigned a different set of computing resource access permissions than the second access credential.
Systems and methods are provided for an on-demand code execution service comprising a set of computing devices for on-demand execution of function code while continuing to facilitate execution of long-running background processes. A subset of resources may be initialized based, at least in part, on application configuration data including at least a request-response process, a background process, and a lesser set of computing resources for the background process. After the execution of the background process has begun, a first request may be received. The on-demand code execution service may increase computing resources to a larger set of computing resources to generate a first response to the first request. The first response may then be provided to an external set of computing resources. After determining that a queue of incoming requests contains no additional requests, the on-demand code execution service may decrease the level of computing resources to the lesser set of computing resources.
Generative pre-trained large language models (LLMs) can create domain-specific text answers in various formats like JSON, XML, HTML, SQL, or programming languages. However, LLMs may "hallucinate," generating incorrect or nonsensical answers that diverge from reality, thus eroding trust in their outputs, or worse. Disclosed techniques use a sampling-based approach and an equivalence checker. Multiple answers (samples) to a prompt are generated by the LLM; if they are equivalent, the LLM is likely answering correctly. If the samples disagree or contradict one another, it is more likely that the LLM is hallucinating or the prompt is ambiguous. An automated reasoning equivalence checker is utilized to verify the samples' functional equivalency, providing a method to detect and possibly rectify hallucination issues in LLM-generated answers.
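The sampling-and-equivalence idea can be sketched as follows. The abstract's checker is an automated reasoning tool; here a trivial text normalizer stands in for it, and the function names (`check_equivalent`, `likely_hallucinating`) and the agreement threshold are assumptions for illustration.

```python
from itertools import combinations

def check_equivalent(a: str, b: str) -> bool:
    """Stand-in equivalence checker: normalize case and whitespace.
    The disclosed system would use an automated reasoning checker instead."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(a) == norm(b)

def likely_hallucinating(samples: list[str], agreement_threshold: float = 1.0) -> bool:
    """Flag a prompt as suspect when the LLM's sampled answers are not
    mutually equivalent: returns True (likely hallucination or ambiguous
    prompt) when the fraction of equivalent pairs falls below the threshold."""
    pairs = list(combinations(samples, 2))
    if not pairs:
        return False
    agreeing = sum(1 for a, b in pairs if check_equivalent(a, b))
    return agreeing / len(pairs) < agreement_threshold
```

With a real equivalence checker (e.g. one that proves two SQL queries return the same rows), agreement among samples becomes evidence of correctness rather than mere string similarity.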
Examples herein provide an approach to enhance an audio mixture of a teleconference application by switching between noise suppression modes using a single model. Specifically, a machine learning (ML) model may be configured to, in response to receiving an audio mixture representation as input, suppress either a background noise of the audio mixture or suppress all noise of the audio mixture except a user's voice. In some examples, the ML model may be trained on speech and background noise training data during a training phase. In addition, the ML model may be trained on a user's voice during an enrollment phase. In addition, during an inference phase, the ML model may enhance the audio mixture by suppressing a portion of the audio mixture.
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00 characterised by the analysis technique using neural networks
H04M 3/56 - Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
A plurality of location indications may be received that indicate locations of an object at a plurality of times. A plurality of geofence indications that indicate that the object is within a geofence may be generated based on the plurality of location indications. A plurality of notifications of a plurality of geofence events corresponding to the object may be provided, to an account, based on the plurality of geofence indications. The plurality of geofence events may include a geofence entering event and a geofence exiting event. An out-of-order location indication may be detected within the plurality of location indications. A retroactive geofence event regarding which the account has not yet been notified may be determined based on the out-of-order location indication. An additional notification of the retroactive geofence event may be provided to the account.
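The event flow above can be sketched with a simple arrival-order processor that tags events derived from out-of-order fixes as retroactive. This is a minimal sketch: the `LocationFix` type and the string event labels are assumptions, and a production system would re-evaluate the whole timeline rather than only the latest state.

```python
from dataclasses import dataclass

@dataclass
class LocationFix:
    timestamp: float
    inside: bool  # whether the fix falls within the geofence

def geofence_events(fixes):
    """Derive ENTER/EXIT events from fixes processed in arrival order, marking
    events caused by out-of-order fixes as retroactive (not yet notified)."""
    events = []
    processed = []  # fixes in arrival order
    for fix in fixes:
        out_of_order = bool(processed) and fix.timestamp < processed[-1].timestamp
        prev_inside = processed[-1].inside if processed else False
        if fix.inside != prev_inside:
            kind = "ENTER" if fix.inside else "EXIT"
            events.append((kind + ("_RETROACTIVE" if out_of_order else ""), fix.timestamp))
        processed.append(fix)
    return events
```

Each `*_RETROACTIVE` event corresponds to the additional notification the account receives after the out-of-order fix is detected.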
Techniques for caching in a machine learning model (ML) hosting service are described. ML model usage data is aggregated from host usage data provided from each host of a first set of hosts, the ML model usage data including, for a particular ML model, a number of inference requests to the particular ML model. A priority order of hosts in a second set of hosts to service an inference request for the particular ML model is calculated. Based on the ML model usage data and the priority order, a set of ML models to load to a particular host in the second set of hosts is determined. The particular host is caused to load the set of ML models. A router is updated to direct ML model inference requests amongst the second set of hosts.
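The placement decision described above can be sketched as a greedy assignment: hottest models first, each to the best-priority host with free cache capacity. The per-host capacity limit and the function name `choose_models_to_load` are illustrative assumptions, not the patented policy.

```python
def choose_models_to_load(usage_counts, host_priority, capacity_per_host):
    """Assign the most-requested ML models to hosts in priority order.

    usage_counts:      {model: inference_request_count}, aggregated across hosts
    host_priority:     {model: [host, ...]} ordered best-first for that model
    capacity_per_host: max models cached on one host (illustrative constraint)
    Returns {host: [models to load]} for the router to direct requests by."""
    loads = {}
    # Hottest models are placed first so they land on their best-priority host.
    for model in sorted(usage_counts, key=usage_counts.get, reverse=True):
        for host in host_priority[model]:
            cached = loads.setdefault(host, [])
            if len(cached) < capacity_per_host:
                cached.append(model)
                break
    return loads
```

The returned mapping is what would drive both the "cause the host to load" step and the router update in the abstract.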
An index is created with split documents to retrieve and augment generation of a response to a natural language request using a generative machine learning model. When a natural language request is received, a search representation is generated and used to retrieve candidate portions of documents from the index. A relevancy ranking is performed to identify relevant portions of documents from the candidates and provide the relevant portions to prompt a generative machine learning model to provide a result for the natural language request.
Distributed orchestration of data retrieval for a generative machine learning model may be performed. When a natural language request to perform a natural language task is received that is associated with a generative application, one or more data retrievers may be selected to access associated data repositories according to a previously specified retrieval configuration for the generative application. The data may then be obtained by the selected data retrievers and used to generate a prompt to a generative machine learning model. A result of the generative machine learning model may then be used to provide a response to the natural language request to perform the natural language task.
Techniques for using a language model (e.g., a large language model (LLM)) to generate a natural language response to a user input and prosody information (e.g., voice characteristics associated with a synthetic voice to output the natural language response to the user) are described. The prosody information may correspond to a natural language (e.g., text or tokenized) description, a spectrogram, and/or a latent representation of the voice characteristic(s) associated with the natural language response. In some embodiments, the natural language response and the prosody information may be generated by different portions of layers of the language model. In such embodiments, the output of the layer(s) of the language model configured to generate the natural language response may be provided to the layer(s) of the language model configured to generate the prosody information and the output may be used to generate the prosody information, and vice versa.
Techniques for unlearning concepts in the use of a pre-trained generative machine learning model are described. A description of a concept to be unlearned in use of a pre-trained generative machine learning model is received. Negative prompts and positive prompts are processed with the pre-trained generative machine learning model to generate associated activation volume maps. A set of conditions to differentiate activation volume maps associated with negative prompts from activation volume maps associated with positive prompts is identified. A model adapter is generated, the model adapter to use a set of different model parameters when processing of a prompt by the pre-trained generative machine learning model satisfies the set of conditions.
Techniques for analyzing access control policies across multiple provider networks. These techniques compile various policies into a unified policy language broad enough to include diverse policy features, yet specific enough for automated analysis. An automated differential testing method is employed to confirm the accuracy of this compilation by generating access requests, ensuring both original and translated policies consistently grant or deny access. Moreover, an abstraction technique is used to simplify and correlate the complex details of different policies, enabling easier user inquiries about them. For instance, users can determine if an account has write access in one network but not in another. This abstraction sometimes involves replacing actions in original policies, ensuring their compatibility in the target policy language.
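The differential-testing step can be sketched by modeling both the original policy and its compilation as predicates over generated access requests and collecting any disagreements. Modeling policies as plain callables is an assumption for illustration; real engines evaluate structured policy documents.

```python
import itertools

def differential_test(original_policy, compiled_policy, principals, actions, resources):
    """Check that a policy and its compilation into the unified language agree
    on every generated (principal, action, resource) request.

    Policies are modeled as callables: request -> bool (True = allow).
    Returns the list of requests on which the two policies disagree;
    an empty list is evidence the compilation preserved semantics."""
    disagreements = []
    for request in itertools.product(principals, actions, resources):
        if original_policy(request) != compiled_policy(request):
            disagreements.append(request)
    return disagreements
```

Exhaustive enumeration works for small abstract domains; in practice the request space would be sampled or symbolically explored.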
Technologies directed to providing a wireless chipset with integrated radar for presence detection and localization for natural and ambient interactions with a device are described. A wireless device can send, via a first antenna, a set of chirps in a first portion of a frame. The wireless device can receive, via a second antenna, reflected signals corresponding to the set of chirps, and generate in-phase and quadrature (IQ) samples. The wireless device sends (or receives) data in a second portion of the frame to (or from) a second device. The wireless device generates, using RF signals sent or received by the wireless device, channel state information (CSI) data representing channel properties of a channel. The wireless device can determine, using the IQ samples and the CSI data, that an environment in which the wireless device is located has been disrupted by a presence or motion of a person.
G01S 7/00 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00
G01S 13/56 - Discriminating between fixed and moving objects or between objects moving at different speeds for presence detection
G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
G01S 13/34 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated using transmission of continuous, frequency-modulated waves while heterodyning the received signal, or a signal derived therefrom, with a locally-generated signal related to the contemporaneously transmitted signal
G01S 13/536 - Discriminating between fixed and moving objects or between objects moving at different speeds using transmission of continuous unmodulated waves, amplitude-, frequency-, or phase-modulated waves
LARGE LANGUAGE MODEL (LLM)-BASED CORRECTION BASED ON A MULTI-TOOL PROMPT
Techniques for large language model (LLM)-based correction based on a multi-tool prompt are described. In an example, a computer system receives, via a user interface, user input including user-provided information and indicating a request for a task to be performed on the user-provided information. The computer system generates, by using an LLM associated with a prompt, a first input to a first tool based on the user input. The prompt indicates a sequence of steps to perform for the task and tools available to the LLM. The first tool corresponds to a first step of the sequence of steps. The computer system determines, by using the LLM, a first output of the first tool in response to the first input, an update to the user-provided information based on the first output, and a completion of the task. The computer system causes the user interface to present the update.
Disclosed are various embodiments for an artificial intelligence (AI) assistant to configure and manage radio-based networks such as cellular networks. In one embodiment, an AI language model is taught to recognize a deployment configuration grammar for deploying radio-based networks. A prompt is received from a customer to generate at least a portion of a deployment configuration for a network function in a radio-based network. The deployment configuration, or portion thereof, is generated by the AI language model according to an intent expressed in the prompt.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04W 24/02 - Arrangements for optimising operational condition
Techniques for determining one or more responses associated with one or more components that are responsive to a user input are described. The system receives a user input and causes one or more components to generate one or more responses associated with the user input. The system determines one or more of the responses are responsive to the user input, causes one or more actions associated with the responses to be performed, and outputs a natural language summary of the one or more responses. If the system determines that none of the responses are responsive to the user input and/or an ambiguity exists with respect to the user input, the system can generate a request for additional information usable to resolve the ambiguity, which may be sent to another component of the system and/or output to the user that provided the user input.
A computer network organized in a logical grid having rows and columns can include network nodes coupled according to harmonics. Each network node can be coupled to network nodes of the same row using a set of horizontal strands according to a set of horizontal harmonics. Each of the horizontal harmonics specifies a node distance along the row between adjacent connection points on the corresponding horizontal strand. Each network node can also be coupled to network nodes of the same column using a set of vertical strands according to a set of vertical harmonics. Each of the vertical harmonics specifies a node distance along the column between adjacent connection points on the corresponding vertical strand.
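The harmonic wiring rule can be sketched as a neighbor computation: each harmonic h connects a node to the nodes h positions away along its strand. Wrap-around (torus) wiring is an assumption here; the abstract does not specify edge behavior at the grid boundary.

```python
def harmonic_neighbors(row, col, rows, cols, horizontal_harmonics, vertical_harmonics):
    """Return the set of (row, col) nodes that a node connects to.

    Each horizontal harmonic h places connection points every h nodes along
    the node's row strand; each vertical harmonic does the same along its
    column strand. Indices wrap modulo the grid size (an assumption)."""
    neighbors = set()
    for h in horizontal_harmonics:
        neighbors.add((row, (col + h) % cols))
        neighbors.add((row, (col - h) % cols))
    for v in vertical_harmonics:
        neighbors.add(((row + v) % rows, col))
        neighbors.add(((row - v) % rows, col))
    neighbors.discard((row, col))  # a node is not its own neighbor
    return neighbors
```

For example, on an 8x8 grid, harmonics {1, 3} on the horizontal strands give each node four same-row neighbors at distances 1 and 3 in both directions.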
A system detects the location of a user gaze at a display and, in response to the duration of the gaze exceeding a threshold, auto-plays content on the display. The system may also determine gaze event data associating the gaze event with the source of the content the user is gazing at. Other information may also be associated with the gaze event, such as user ID, time/duration data, or the like. Various actions can be taken in response to the gaze event, such as auto-playing of content, outputting a visual indication of the detected gaze, interpreting detected speech using the gaze event data, data aggregation, etc.
Devices and techniques are generally described for privacy preservation for computer vision models. In some examples, a first field of text and a second field of text may be detected in a first image. A first alpha-numeric text string may be detected in the first field and a second alpha-numeric text string may be detected in the second field. A first sub-image including the first alpha-numeric text string may be generated and a second sub-image including the second alpha-numeric text string may be generated. The first sub-image may be sent to a first computing device for annotation and the second sub-image may be sent to a second computing device for annotation.
An object storage system includes mass storage devices that implement general storage for objects stored in the object storage system and additionally includes other storage devices, such as solid-state drives, that provide higher performance storage access. The object storage system implements a common access interface for accessing both accelerated access objects (which are eligible to have cached copies stored on the higher performance storage devices) and non-accelerated access objects stored in the general storage. The cache is fully managed by the service, and no changes are required for client applications to receive accelerated access to objects that are classified as accelerated access objects per a customer-configurable acceleration policy for the object or for a bucket in which the object is stored.
Systems and methods are provided for a natural language question answering service to provide answers to natural language questions regarding network-based services or computing domains. The natural language question answering service may receive the natural language question from a customer computing device. An aggregator of the natural language question answering service can retrieve passages from search systems based on the question and generate a prompt. A large language model (LLM) of the natural language question answering service may receive the prompt and provide an answer. The answer may be verified by a verifier of the natural language question answering service. Attribution may be applied to the answers and retrieved passages to produce references, inline citations, and similar questions. A watermarking module of the natural language question answering service may watermark the answer if it is verified.
Automated scaling-related operations may be performed dynamically during execution of a spatial simulation. A spatial partition may be locally reassigned, based on application workload information, from a first application to a second application on the same worker. A quantity of applications on a worker may also be changed during execution of a spatial simulation. A parent spatial partition may be split into child spatial partitions, and child partitions may also be merged back into a common parent partition. Indications of partition splits and merges on each of a plurality of workers may be reported to the plurality of workers. A spatial partition may also be remotely reassigned from a first worker to a second worker, such as based on worker-level resource consumption information and partition information. A quantity of workers that are used to implement a spatial simulation may also be changed during execution of the spatial simulation.
Techniques for generating a prompt for a language model to determine an action responsive to a user input are described. In some embodiments, the system receives a user input, determines one or more application programming interfaces (APIs) configured to perform actions that are relevant to the user input and exemplars representing examples of using the APIs with respect to user inputs similar to the current user input. The system further determines device states of devices that are determined to be related to the user input and also determines other contextual information (e.g., weather information, time of day, geographic location, etc.). The system generates a prompt including the user input, the APIs, the exemplars, the device states, and the other contextual information. A language model processes the prompt to determine an action responsive to the user input and the system causes performance of the action.
Systems and techniques for moderating responses of a generative language model are described herein. Some user inputs to a generative language model may include biases, misinformation, and other references to moderated content. To prevent the generative language model from generating responses that promote these forms of moderated content, the techniques described determine a policy corresponding to the determined moderated content category of the user input. The determined policy may correspond to a template of instructions for how the generative language model is to respond to such moderated content. The output of the generative language model may also be moderated before being presented to the user.
Techniques for cryptographic messaging are described. In some examples, cryptographic messaging is enabled on a device that at least includes an input/output port configured to receive an encrypted message from a coupled external device; a hardware security module (HSM) configured to decrypt the encrypted message, wherein the HSM is to include storage to store at least one private key to be used to decrypt the encrypted message; and a screen to display contents of the decrypted message.
This disclosure describes techniques for optimizing motion detection in systems that use passive infrared (PIR) detectors. The techniques comprise generating a first signal and a second signal by two sets of detector elements. The techniques further involve calculating a sum of the absolute values of the first signal and the second signal. The sum is compared to a threshold to determine motion. Alternatively, the techniques may comprise determining whether a slope of either the sum or a difference of the two signals exceeds a threshold slope value. If the slope exceeds the threshold slope value, a determination may be made as to whether an amplitude of the sum or difference exceeds a detection threshold. If the amplitude does exceed the detection threshold, then a motion detection event is detected. If the slope does not exceed the threshold slope value, the first signal and the second signal are zeroed out.
G08B 29/18 - Prevention or correction of operating errors
G08B 13/191 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using infrared-radiation detection systems using pyroelectric sensor means
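The PIR decision flow above (sum of absolute values, slope gate, amplitude check, zeroing) can be sketched as follows. The concrete thresholds and the zeroing of the combined signal rather than the raw element signals are simplifying assumptions.

```python
def detect_motion(sig_a, sig_b, slope_threshold, amplitude_threshold):
    """Motion decision from two PIR detector-element signals.

    Forms the sum of absolute values sample by sample, requires the
    sample-to-sample slope to exceed a threshold before checking amplitude,
    and zeroes out samples whose slope looks like slow drift/noise."""
    combined = [abs(a) + abs(b) for a, b in zip(sig_a, sig_b)]
    for i in range(1, len(combined)):
        slope = combined[i] - combined[i - 1]
        if abs(slope) > slope_threshold:
            if combined[i] > amplitude_threshold:
                return True  # motion detection event
        else:
            # Slope too shallow: treat the samples as noise and zero them out.
            combined[i - 1] = combined[i] = 0
    return False
```

The slope gate is what rejects slow thermal drift that would otherwise cross the amplitude threshold over many samples.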
Techniques for constraining the results of a generative language model to valid information using knowledge-grounded documentation. A generative language model may generate invalid results, including compound entities and incorrect entity relations. The techniques include, for a given user inquiry, determining a set of documented information, from a particular knowledge base, that corresponds to the user inquiry. The techniques further include determining a subgraph from a knowledge graph representing the knowledge base, as well as determining a trie data structure representation of the set of documented information. The user inquiry and subgraph are provided as input to a trained generative language model for generating a response to the user inquiry. The techniques include using the trie data structure to validate that the generated response corresponds to real information from the set of documented information.
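The trie-based validation step can be sketched directly: documented entity names are inserted as token sequences, and a generated mention is valid only if it traces a path ending at a terminal node. This rejects invented compound entities such as a real name with an extra token appended. The class and function names are illustrative assumptions.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.terminal = False  # True if a documented entity ends here

def build_trie(valid_entities):
    """Build a trie over the token sequences of documented entities."""
    root = TrieNode()
    for tokens in valid_entities:
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
        node.terminal = True
    return root

def is_valid_entity(trie, tokens):
    """Validate that a generated mention exactly matches a documented entity."""
    node = trie
    for tok in tokens:
        if tok not in node.children:
            return False
        node = node.children[tok]
    return node.terminal
```

During constrained decoding, the same trie can also restrict the model's next-token choices to children of the current node, preventing invalid entities from being generated at all.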
A placement plan for training state checkpoints of a machine learning model is generated based at least in part on a number of training servers of a distributed training environment. The plan indicates, with respect to an individual server, one or more other servers at which replicas of training state checkpoints of the individual server are to be stored. During selected periods of one or more training iterations of the model, respective portions of a replica of a training state checkpoint of a first server are transmitted to a second server selected based on the placement plan. After an event causes disruption of the training iterations, one of the checkpoints generated at the first server is retrieved from the second server and used to resume the training iterations.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06N 3/098 - Distributed learning, e.g. federated learning
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
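A placement plan of the kind described can be sketched with a simple ring policy: each training server replicates its checkpoint to the next few servers by index. The ring layout and replication factor are illustrative assumptions, not the patent's exact scheme.

```python
def build_placement_plan(num_servers, replication_factor=2):
    """Map each training server to the servers holding replicas of its
    checkpoint. Ring placement: server i replicates to the next
    `replication_factor` servers, wrapping around."""
    return {
        i: [(i + k) % num_servers for k in range(1, replication_factor + 1)]
        for i in range(num_servers)
    }

def recovery_sources(plan, failed_server):
    """Servers from which the failed server's checkpoint can be retrieved
    to resume training after a disruption."""
    return plan[failed_server]
```

Transmitting checkpoint portions incrementally during selected training iterations, as the abstract describes, spreads the replication bandwidth cost across the iteration rather than pausing training for a bulk copy.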
SYNTHETIC DATA GENERATION FOR MACHINE LEARNING MODELS
Techniques for generating synthetic data for machine learning (ML) models are described. A system includes a language model that processes a task and a corresponding set of example inputs to generate another input, referred to herein as machine-generated data. The machine-generated data is processed using the ML model (that the data is being generated for) to determine a model output, and the model output is analyzed to determine whether it corresponds to a target output. If the model output corresponds to the target output, then the machine-generated data is added to the set of example inputs and one of the original example inputs is removed to generate an updated set of example inputs. The updated set can be used for various training techniques.
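The generate-check-rotate loop above can be sketched with the language model and the ML model under training abstracted as callables. Both stand-ins and the function name are assumptions for illustration.

```python
def generate_synthetic_examples(generate_candidate, model_predict, target_output,
                                example_inputs, rounds=5):
    """Iteratively grow a pool of machine-generated inputs.

    generate_candidate: callable(examples) -> candidate input (stands in for
                        the language model prompted with task + examples)
    model_predict:      callable(candidate) -> model output (the ML model
                        that data is being generated for)
    A candidate is kept only if the model maps it to the target output; it is
    then rotated into the example set and the oldest example is dropped."""
    examples = list(example_inputs)
    accepted = []
    for _ in range(rounds):
        candidate = generate_candidate(examples)
        if model_predict(candidate) == target_output:
            accepted.append(candidate)
            examples = examples[1:] + [candidate]  # replace oldest example
    return accepted, examples
```

Rotating accepted candidates into the example set steers later generations toward the region of inputs the model already handles as intended.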
A device is configured to detect multiple different wakewords. A device may operate a joint encoder that operates on audio data to determine encoded audio data. The device may operate multiple different decoders which process the encoded audio data to determine if a wakeword is detected. Each decoder may correspond to a different wakeword. The decoders may use fewer computing resources than the joint encoder, allowing for the device to more easily perform multiple wakeword processing. Enabling/disabling wakeword(s) may involve the reconfiguring of a wakeword detector to add/remove data for respective decoder(s). Specific decoders may be activated/deactivated depending on device context, thereby efficiently managing device resources.
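The shared-encoder, per-wakeword-decoder arrangement can be sketched as running one encoding pass and dispatching it only to the currently enabled decoders. The per-frame mean feature and threshold decoders are toy stand-ins for the neural encoder and decoders.

```python
def encode(audio_frames):
    """Joint encoder stand-in: one cheap feature (mean) per audio frame.
    In the described device this is the expensive shared computation."""
    return [sum(frame) / len(frame) for frame in audio_frames]

def make_decoder(threshold):
    """Lightweight per-wakeword decoder: fires if any encoded frame
    exceeds its threshold (toy stand-in for a small neural decoder)."""
    return lambda encoded: max(encoded) >= threshold

def detect_wakewords(audio_frames, decoders, enabled):
    """Run the joint encoder once, then only the enabled decoders,
    returning {wakeword_name: detected} for each enabled wakeword."""
    encoded = encode(audio_frames)
    return {name: dec(encoded) for name, dec in decoders.items() if name in enabled}
```

Enabling or disabling a wakeword is then just adding or removing an entry from `enabled`, with no change to the shared encoding pass.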
A system and method for an infrastructure as code (IaC) environment include automatically generating different infrastructure code for different timepoints based at least in part on changes in infrastructure configurations at the different timepoints of a first virtual private cloud (VPC), so as to execute one version of the infrastructure code for a second VPC to cause deployment of one of the infrastructure configurations. The system and method also include automatically generating or updating a script based at least in part on changes in the infrastructure code, the script to be used to deploy at least a version of the infrastructure configurations with the changes in a second VPC of a different geographical location than that associated with the first VPC.
H04L 41/0663 - Performing the actions predefined by failover planning, e.g. switching to standby network elements
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 41/046 - Network management architectures or arrangements comprising network management agents or mobile agents therefor
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
A vehicle software deployment management system generates a modifiable deployment plan and one or more associated vehicle software modules that are sent to an edge device at a vehicle activity site. The edge device stores the modifiable deployment plan and the one or more vehicle software modules for updating a given vehicle at a future time. In some embodiments, the vehicle may have insufficient network connectivity to perform the update remotely from the vehicle activity site. In some embodiments, the modifiable deployment plan may be modified by the edge device at the future time when the vehicle is present at the vehicle activity site, based on vehicle information obtained by the edge device and based on vehicle user/technician input obtained by the edge device.
Disclosed are various embodiments related to an entitlement service aggregation layer. In one embodiment, a request to enable or disable an entitlement for a device is received from a first system operated by or on behalf of a communication service provider (CSP). The device is identified by a unique device hardware identifier. A manufacturer of the device is identified based at least in part on the unique device hardware identifier. A second system operated by or on behalf of the manufacturer of the device is identified. The request to enable or disable the entitlement is sent to the second system. A result associated with the request is provided to the first system.
A cargo area within a delivery vehicle can include a storage area storing items for destinations along a route. A staging area may be accessible from inside and outside of the cargo area. A dispenser within the cargo area may transfer items from the storage area to the staging area. For example, the dispenser may include a carriage for aligning an extendable and retractable arm operable to transfer an item hanging from a channel onto the arm and from the arm to the staging area. A controller may access route information about proximity to a delivery destination, access item information about an item assigned for that delivery destination, and cause the dispenser to move the assigned item from the storage area to the staging area for making the assigned item accessible from outside the cargo area in response to the proximity to the delivery destination.
Approaches for predicting manufacturability of a peptide are provided. A request for information related to manufacturability of a peptide can be received. A determination as to whether the peptide is predicted to be synthesizable can be made, such as by using a machine learning model. The machine learning model can be trained on data including manufacturer specifications and descriptions associated with a peptide and features for peptides. A second determination can be made as to whether the peptide is predicted to be soluble, using the same or different machine learning model trained with solubility data for peptides. If the peptide is predicted to be soluble and synthesizable, a manufacturability score for the peptide can be determined. The manufacturability score can correspond to or be indicative of a chance of successfully manufacturing the peptide.
Disclosed are various embodiments for mobility between radio-based networks operated by communication service providers and private radio-based networks. In one embodiment, first data is sent or received via a private radio-based network using a wireless network connection in a user equipment (UE) device. It is determined to switch the wireless network connection in the UE device from the private radio-based network to a communication service provider (CSP)-operated radio-based network. The private radio-based network and the CSP-operated radio-based network utilize a cellular network standard. The wireless network connection switches from the private radio-based network to the CSP-operated radio-based network. Second data is sent or received via the CSP-operated radio-based network using the wireless network connection in the UE device.
This disclosure relates to methods of non-invasively measuring the degree of cellular energy metabolism. The methods disclosed herein comprise applying monitoring devices to a bodily surface or solution containing cells (e.g., a dermal surface of a subject), generating a radio frequency signal, detecting operating values of the circuitry of the device, maintaining contact between the monitoring device and the bodily surface or solution containing cells for a desired length of time, and collecting data that corresponds to the degree of cellular energy metabolism. Methods of this disclosure can be used to correlate the degree of cellular energy metabolism to other physiological or cellular phenomena (e.g., cellular respiration, cell viability).
To identify sets of pixels in a first image that correspond to different objects or a background, the first image is provided to a Generative Adversarial Network (GAN). The GAN determines alternate images that retain the structural characteristics of the first image, such as the locations and shapes of objects, while modifying style characteristics, such as the colors of pixels. The images generated by the GAN may then be analyzed, such as by using a k-means clustering algorithm, to determine sets of pixels at the same location that change color in a similar manner across the set of images. A set of pixels that changes in a similar manner across the images generated by the GAN may be used as a mask representing an object or background to enable modification of the image without interfering with other objects.
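A minimal sketch of the clustering step, assuming the GAN outputs are already available (three hand-made grayscale "style variants" stand in for them here); the tiny k-means implementation below is illustrative, not the system's actual code:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over fixed-length tuples (squared Euclidean distance)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return assign

# Toy stand-in for GAN outputs: three style variants of a 2x2 grayscale image.
# Pixels 0 and 1 (the "object") change brightness together across variants;
# pixels 2 and 3 (the background) stay roughly constant.
variants = [
    [200, 210, 10, 12],
    [120, 125, 11, 13],
    [60,  66,  9,  11],
]
# Each pixel's trajectory: its value in every generated variant.
trajectories = [tuple(img[p] for img in variants) for p in range(4)]
mask = kmeans(trajectories, k=2)   # pixels sharing a cluster label form a mask
```

Pixels whose trajectories cluster together (here, pixels 0 and 1 versus 2 and 3) would be grouped into one mask, separating the restyled object from the stable background.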
An automatic technique is disclosed to enrich presented answers by highlighting relevant shopping recommendations. The shopping recommendations can either be highlighted within the answer itself, or as an auxiliary list of suggestions. A model is described for selecting phrases from the answer text (sequences of consecutive terms called noun phrases) that refer to potential products that likely represent relevant shopping recommendations in context of the question-answer pair. The noun phrases are then ranked in order of importance. The top-ranked noun phrases are used to search products to be displayed in association with the noun phrases. Clicking or tapping on a highlighted noun phrase launches a shopping-related flow, such as presenting a widget with product recommendations or running a search in a search engine.
SOFTWARE-DEFINED MULTI-NETWORK-SEGMENT GATEWAYS FOR SCALABLE ROUTING OF TRAFFIC BETWEEN CUSTOMER-PREMISE NETWORK SEGMENTS AND CLOUD-BASED VIRTUAL NETWORKS
During a communication session established with a customer-premise routing information source, a route signaling node of a multi-network-segment gateway of a cloud provider network obtains respective sets of labeled routing information pertaining to multiple customer-side network segments of a customer. The route signaling node propagates the routing information to data plane nodes of the gateway. The data plane nodes utilize the routing information to forward data packets to destinations associated with particular customer-side network segments.
H04L 45/50 - Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
H04L 45/586 - Association of routers of virtual routers
H04L 45/64 - Routing or path finding of packets in data switching networks using an overlay routing layer
H04L 45/645 - Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
H04L 45/655 - Interaction between route computation entities and forwarding entities, e.g. for route determination or for flow table update
H04L 45/76 - Routing in software-defined topologies, e.g. routing between virtual machines
Systems and methods for implementing quantum circuit compilation as-a-service are disclosed. In some embodiments, a quantum circuit compilation service is configured to compile quantum circuits for a plurality of third-party customers, wherein the compilation service supports compiling quantum circuits to be executed on a plurality of different quantum processing units that utilize various different quantum computing technologies. In some embodiments, the quantum circuit compilation service generates a customized compilation job plan for each quantum circuit to be compiled. The compilation job plan may reference modular compilation passes stored in a repository of the quantum circuit compilation service. The modular passes may be mixed and matched as needed to allow for compilation of a wide variety of quantum circuits to be executed using various different quantum computing technologies.
G06N 10/80 - Quantum programming, e.g. interfaces, languages or software-development kits for creating or handling programs capable of running on quantum computers; Platforms for simulating or accessing quantum computers, e.g. cloud-based quantum computing
G06N 10/20 - Models of quantum computing, e.g. quantum circuits or universal quantum computers
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
A vehicle signal relay system enables a relay agent in a first zone of a vehicle to send sensor signals having a first link-layer communication protocol to a software application deployed on a compute unit in another zone of the vehicle that is connected using another link-layer communication protocol. The vehicle signal relay system allows the software application to identify target relay agents with access to needed sensor signals. The vehicle signal relay system may further enable one way or mutual attestation. The vehicle signal relay system may also allow filters to be applied to the subscribed vehicle sensor signals, and may allow the software application to determine a communication protocol to be used between the software application and the relay agent.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/565 - Conversion or adaptation of application format or content
H04L 69/08 - Protocols for interworking; Protocol conversion
H04L 69/324 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
GENERATING IMAGES OF SYNTHESIZED BODIES WEARING A GARMENT
Systems and methods are described for generating images of synthesized bodies wearing a garment. For instance, a source image of a human or mannequin wearing a garment may be submitted to a synthesized human generation system. In response to receiving the source image, the synthesized human generation system may use a classifier to classify the image as depicting one or more body types or orientations. The synthesized human generation system may also apply segmentation to the source image to segment the garment pixels. The synthesized human generation system may then select one or more body generation machine learning models based on the classification of the source image. The synthesized human generation system may utilize the selected machine learning models to generate one or more output images of synthesized bodies that appear to be wearing the garment, using the segmented garment as input.
Systems and methods for performing medical audio summarizing for medical conversations are disclosed. An audio file and metadata for a medical conversation are provided to a medical audio summarization system. A transcription machine learning model is used by the medical audio summarization system to generate a transcript and a natural language processing service of the medical audio summarization system is used to generate a summary of the transcript. The natural language processing service may include at least four machine learning models that identify medical entities in the transcript, identify speaker roles in the transcript, determine sections of the transcript corresponding to the summary, and extract or abstract phrases for the summary. The identified medical entities and speaker roles, determined sections, and extracted or abstracted phrases may then be used to generate the summary.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Systems and methods for providing a seamless automatic repeat request (ARQ) stream are provided. The system can include a plurality of streaming servers, a load balancer, and an ARQ streaming service. The ARQ streaming service obtains encoded content segments and transmits the segments to a plurality of streaming servers. The plurality of streaming servers is configured to transmit the encoded content segments to a client computing device, and a load balancer is implemented between the client computing device and the plurality of streaming servers. When the client computing device sends a response message to the ARQ streaming service while receiving the encoded content segments from one of the plurality of streaming servers, the ARQ streaming service may identify a failure in the streaming server. The ARQ streaming service retransmits lost encoded content segments by switching the streaming path to another streaming server from the plurality of streaming servers.
H04L 67/1004 - Server selection for load balancing
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
A vehicle software test environment management system provides a virtual vehicle environment that includes virtual electronic control units (vECUs) having a virtual bus connectivity configuration used to simulate respective ones of electronic control units (ECUs) of a real-world vehicle. The vehicle software test environment management system determines respective instance types of one or more virtual compute instances to be used to implement the vECUs based on respective configuration of respective ones of the ECUs and further determines respective machine images to emulate respective software environments of the respective ones of the ECUs. The vehicle software test environment management system may also deploy a vehicle software application to be certified on one or more of the vECUs and test the deployed vehicle software application using recorded signals of one or more ECUs of the real-world vehicle.
Techniques are described for executing satisfiability modulo theories (SMT) solvers in a "shadow" system configuration where input queries are provided to a primary SMT solver system and additionally to one or more secondary SMT solver systems. SMT solver systems can be used by cloud providers and in other computing environments to analyze the implications of configured user account policies defining permissions with respect to users' computing resources and associated actions within a computing environment, to help ensure the security of computing resources and user data, etc. The results generated by a primary SMT solver system can be provided to one or more secondary SMT solver systems, where each of the secondary SMT systems can comprise different system components or different versions of system components, to assess the correctness of the primary SMT solver system, to compare performance metrics, among other possible types of analyses.
Described techniques and systems can identify a request to transfer one or more computer-implemented resources associated with a first computer-implemented account to a second computer-implemented account, the one or more computer-implemented resources at least in part managed through a service accessible by at least one entity selected to consider the request, the service implemented separately from another service to manage access to the one or more computer-implemented resources. Also, the techniques and systems can confirm the at least one entity approved the request to transfer the one or more computer-implemented resources associated with the first computer-implemented account, and transfer the one or more computer-implemented resources to the second computer-implemented account.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
Synchronous replication for a distributed database system may be performed using an erasure coding scheme. A request that causes a write to a database hosted in a distributed database system is received. A replication message for a synchronous replication technique is generated, then divided and encoded into a number of chunks according to an erasure encoding scheme that allows the replication message to be reassembled with less than the number of chunks. The chunks are sent to another instance of the database which receives and reassembles the replication message from the chunks and responds to acknowledge that the write is committed.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
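The chunking step can be illustrated with the simplest erasure code that permits reassembly from fewer than all chunks: k data chunks plus one XOR parity chunk, any k of which suffice. This is a hedged sketch; a production system might use a more general scheme (such as Reed-Solomon) tolerating more than one lost chunk:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(message, k):
    """Split a replication message into k equal data chunks plus one XOR
    parity chunk; any k of the k + 1 chunks suffice to reassemble it."""
    pad = (-len(message)) % k
    data = message + bytes(pad)            # pad so the split is even
    size = len(data) // k
    chunks = {i: data[i * size:(i + 1) * size] for i in range(k)}
    parity = bytes(size)
    for c in chunks.values():
        parity = xor_bytes(parity, c)
    chunks[k] = parity                     # index k marks the parity chunk
    return chunks, len(message)

def decode(received, k, msglen):
    """Reassemble the message from any k chunks ({index: chunk})."""
    missing = [i for i in range(k) if i not in received]
    if missing:
        (m,) = missing                     # one parity chunk tolerates one loss
        rec = received[k]                  # start from parity and XOR survivors
        for i in range(k):
            if i != m:
                rec = xor_bytes(rec, received[i])
        received[m] = rec                  # recovered data chunk
    return b"".join(received[i] for i in range(k))[:msglen]

chunks, msglen = encode(b"commit: INSERT INTO t VALUES (1)", k=4)
survivors = {i: c for i, c in chunks.items() if i != 2}   # chunk 2 lost
restored = decode(survivors, k=4, msglen=msglen)
```

The receiving database instance acknowledges the commit once it has reassembled the replication message, even though one chunk never arrived.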
SYSTEMS FOR DETERMINING ITEM FEATURES BASED ON REVIEW DATA
An interface that presents features for a category of items that are liked and disliked by customers is created using information presented in customer reviews of items that belong to the category. Terms and phrases that correspond to particular features of items are used to determine a number or percentage of reviews that reference these features in a positive or negative manner. This information may be presented in an interface, for use by a product manager or designer, or alongside one or more portions of a review that references the particular features. Item ratings included with the reviews may be used to determine an effect on the rating of an item associated with the presence or absence of particular features. The effect on the rating may be presented in the same or a separate interface to provide insight regarding suitable features for new product development.
Devices and techniques are generally described for sensor-based privacy protection for devices. In some examples, a first machine learning model and first data generated by an accelerometer of an electronic device may be used to determine that the first data corresponds to a predefined motion profile, such as quadruped movement. In various examples, a first location associated with the electronic device may be determined. In some further examples, a wireless transmitter may transmit second data indicating the first location based on the determination that the first data corresponds to the predefined motion profile.
Disclosed are various embodiments for dynamic spectrum management for portable radio-based networks. In one embodiment, it is determined that a portable radio unit implementing a cell of a radio-based network has moved away from a reference location associated with a first spectrum allocation used by the portable radio unit. A second spectrum allocation for the portable radio unit is obtained based at least in part on a direction of movement of the portable radio unit. The portable radio unit is then configured to use the second spectrum allocation instead of the first spectrum allocation.
Techniques for intelligent multi-carrier network edge application deployment are described. Traffic that is destined for an application implemented in multiple edge locations of a cloud provider network is originated by a mobile user equipment device via use of a communications network of a first communications service provider (CSP). An edge location hosting the application, from multiple such candidates, can be selected as a destination for the traffic. The edge location may be deployed in a facility of a different CSP. The traffic can be sent into the edge location using a network address of the different CSP to securely allow for its entry thereto.
Techniques for recognizing phonemes from a spoken input and providing pronunciation feedback as part of a language learning experience are described. Some embodiments use a machine learning model configured to recognize phonemes spoken in a user's native language and spoken in the language to be learned. The model is trained with the native language's lexicon and the learning language's lexicon. The system can provide feedback at a word level, a syllable level and/or phoneme level. The system can also provide feedback with respect to phoneme stress.
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 13/10 - Prosody rules derived from text; Stress or intonation
An expressive speech translation system may process source speech in a source language and output synthesized speech in a target language while retaining vocal performance characteristics such as intonation, emphasis, rhythm, style, and/or emotion. The system may receive a transcript of the source speech, translate it, and generate transcript data. To generate the synthesized speech, the system may process the transcript data with a language embedding representing language-dependent speech characteristics of the target language, a speaker embedding representing speaker-dependent voice identity characteristics of a speaker, and a performance embedding representing the vocal performance characteristics of the source speech. The system may control the duration of segments of the synthesized speech to better align with corresponding segments of the source speech for the purpose of dubbing multimedia content with synthesized speech in a language different from that of the original audio.
User enrollment to a biometric identification system begins with a pre-enrollment process on selected general input devices (GID) such as smartphones. The user may enter identification data such as their name and use a camera of the GID to acquire first image data, such as of their hand. The first image data is processed to determine a first representation. Upon presentation of a hand at a biometric input device, second image data is acquired. The second image data is processed to determine a second representation. If the second representation is deemed to be associated with the first representation, the enrollment process may be completed by storing the second representation for subsequent use.
The present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for training and using a multi-scale machine learning model for the enhancement of compressed video.
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
UN-LEARNING OF TRAINING DATA FOR MACHINE LEARNING MODELS
Methods and systems are disclosed for a machine learning (ML) model training system that can remove the influence of specific data points in an efficient way. An ML training system can train multiple instances of a machine learning model on disjoint shards of data. Upon receiving a request to remove a specific data point, the ML training system can expunge the data point from its corresponding shard and only retrain the model instance for that specific shard. Each shard can be further divided into data slices, with each slice containing a portion of the data from the shard. During the training of each instance of the machine learning model, the ML training system can save model checkpoints after completion of training for each slice. Upon receiving a removal request, the related data point is removed from its respective slice, and the relevant model instance can be retrained starting from the last checkpoint before that slice had been previously used for training.
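A sketch of the shard-and-slice bookkeeping, using a toy "model" (a running sum) so that exact retraining is easy to verify; the class and method names are invented for illustration:

```python
class ShardModel:
    """One model instance trained on one shard, with the shard divided into
    slices and a checkpoint saved after training on each slice."""

    def __init__(self, slices):
        self.slices = [list(s) for s in slices]
        self.checkpoints = [0.0]        # checkpoints[j] = state before slice j
        for j in range(len(self.slices)):
            self._train_slice(j)

    def _train_slice(self, j):
        state = self.checkpoints[j]
        for x in self.slices[j]:
            state += x                  # toy "training step" on one data point
        if j + 1 < len(self.checkpoints):
            self.checkpoints[j + 1] = state
        else:
            self.checkpoints.append(state)

    @property
    def state(self):
        return self.checkpoints[-1]

    def unlearn(self, point):
        """Remove a data point and retrain only from the last checkpoint
        saved before the slice that contained it."""
        for j, s in enumerate(self.slices):
            if point in s:
                s.remove(point)
                for jj in range(j, len(self.slices)):
                    self._train_slice(jj)
                return
        raise KeyError(point)

# Two disjoint shards, each with two slices. Removing 5.0 retrains only
# shard 1, and only from the checkpoint taken before its second slice.
shards = [ShardModel([[1.0, 2.0], [3.0]]), ShardModel([[4.0], [5.0, 6.0]])]
shards[1].unlearn(5.0)
```

Because shards are disjoint, shard 0 is untouched by the removal, and the checkpoint taken before the affected slice lets shard 1 skip retraining on its earlier slices.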
Techniques for resource sharing between cloud-hosted virtual networks are described. A first network address of a first virtual network is associated with a resource connected to a second virtual network, the first and second virtual networks within a cloud provider network. A service of the cloud provider network receives a message destined for the first network address. The service translates the first network address to a second network address of the resource in the second virtual network. The service sends the message to the resource at the second network address in the second virtual network.
A method includes generating a virtual model of a human body based at least in part on a selected image of the human body and generating a segment of an article of clothing (e.g., a shirt or a pair of pants) based at least in part on a selected image of the article of clothing. The method also includes generating a layer mask indicating whether each of a plurality of output pixels of an output image should be produced according to the image of the human body or the image of the article of clothing, and producing the plurality of output pixels of the output image according to the layer mask. The output image shows the article of clothing on the human body in the selected image of the human body.
A system can function relative to an item, a tether, and a robotic manipulator. The tether can correspond to a loop or other structure that can be mountable or mounted in an installed state in which the tether is secured with the item to facilitate lifting the item by lifting of the tether. The robotic manipulator can include a robotic end effector engageable with the tether in the installed state. The robotic end effector can be configurable to an engaged state in which the tether is coupled with the robotic end effector. The robotic manipulator in the engaged state can be operable to move the item by lifting of the tether in the installed state.
Systems, methods, and computer-readable media are disclosed for shuttle lift systems to facilitate simultaneous movement of multiple shuttles across a multi-level storage structure that stores packages. The shuttle lift system may include the multi-level storage structure and a tower lift system at each end of the multi-level storage structure. One tower lift system may raise shuttles to a level of the multi-level storage structure. The other tower lift system may lower shuttles to a level of the multi-level storage structure and/or to a delivery area. The tower lift system may include cables (e.g., tethers) that are supported by a bottom portion and an upper portion of the tower lift system. Shelves may be attached to the cables on either side of the tower lift system and may receive and move shuttles up/down the tower lift system. The tower lift system may have high shuttle throughput and a reduced footprint.
Techniques for a dynamic radar mode modulation feature are described herein. A computer system associated with a device may implement a first radar configuration for a radar sensor. The first radar configuration may correspond to a first mode and comprise a first frame per second rate and a first difference threshold. The computer system may receive first data from the radar sensor in the first radar configuration. The computer system may determine a presence of an object (e.g., a user) within a field of view of the radar sensor based on the first data and the first difference threshold. The computer system may instruct the device to turn on based on determining the presence of the object. The radar sensor may be instructed to implement a second radar configuration associated with a second mode.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/443 - OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
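The mode switch can be sketched as a small state machine; the frame rates, thresholds, and scalar "frames" below are illustrative placeholders, not values from the described system:

```python
class RadarModeController:
    """Two radar configurations: a low-power idle mode (low frame rate, coarse
    difference threshold) and an active mode (high frame rate, fine threshold)."""
    IDLE = {"fps": 2, "diff_threshold": 10.0}
    ACTIVE = {"fps": 30, "diff_threshold": 2.0}

    def __init__(self):
        self.config = self.IDLE
        self.prev_frame = None
        self.device_on = False

    def ingest(self, frame):
        # frame: a single scalar radar return, for simplicity
        if self.prev_frame is not None:
            if abs(frame - self.prev_frame) > self.config["diff_threshold"]:
                self.device_on = True        # presence detected: wake the device
                self.config = self.ACTIVE    # and switch to the second mode
        self.prev_frame = frame

ctrl = RadarModeController()
for sample in (5.0, 5.1, 5.2, 25.0):   # last sample: object enters the field
    ctrl.ingest(sample)
```

Small frame-to-frame differences in idle mode keep the device off at a low sampling rate; a difference exceeding the idle threshold triggers the wake-up and the switch to the second configuration.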
DYNAMIC BANDWIDTH TRANSPORT LINKS FOR RADIO-BASED NETWORKS
Disclosed are various embodiments for dynamic bandwidth allocations for cell site transport links on a radio-based network. In one embodiment, a predicted bandwidth usage at a cell site of a radio-based network is determined. A bandwidth allocation on a data link between the cell site and a data center is dynamically adjusted based at least in part on the predicted bandwidth usage.
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
Systems and methods for enterprise type pretrained models for voice interfaces include the generation and validation of enterprise type pretrained models utilizing input associated with the enterprise type at issue. Once generated and validated, when a user command is received, the speech processing system may check to determine if a customized model is available, and if not, may query the enterprise type model to provide a response to the user command.
Techniques are described for providing a SAT-based solver for a quantifier-free theory of strings and bit vectors. The solver can be used by an automated reasoning service of a cloud provider network to analyze policies and the consequences of policies. The solver reduces an input formula to a Boolean satisfiability problem by encoding the input formula into an equisatisfiable propositional formula, where the satisfiability of the equisatisfiable propositional formula is determined by a SAT solver. Rather than using a traditional DPLL(T) style algorithm, the solver described herein bounds the length of variables in an input formula and reduces the problem to a single formula, which can then be solved using incremental SAT solving. The solver can be used independently or as part of a portfolio of solvers used to determine the satisfiability or unsatisfiability of certain formula corresponding, e.g., to questions about users' policies within a cloud provider network.
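The length-bounding idea can be illustrated with a brute-force stand-in: fix a bound on variable length and search all candidate strings up to that bound. A real solver instead encodes this bounded search as one propositional formula handed to an incremental SAT solver, but the bounded search space is the same:

```python
from itertools import product

def bounded_solve(constraint, alphabet, max_len):
    """Enumerate candidate strings up to a fixed length bound and return the
    first satisfying assignment, or None if the constraint is unsatisfiable
    within the bound."""
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            x = "".join(chars)
            if constraint(x):
                return x
    return None

# A policy-style question: is there an x with len(x) == 2 such that
# x + "://" is a prefix of "ab://host"?
sol = bounded_solve(lambda x: len(x) == 2 and "ab://host".startswith(x + "://"),
                    alphabet="ab:/host", max_len=3)
```

A `None` result only shows unsatisfiability up to the chosen bound, which is why bounded solvers either prove a sufficient bound or iteratively increase it.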
Attachment of a pluggable module to an externally-accessible slot of a base unit of a server is detected. The module is configured to execute a first network function of a radio-based communication network. In response to a determination that the module satisfies a security criterion, a second network function is launched. The second network function performs one or more computations on output of the first network function. The output of the first network function is generated at the module in response to a message from a user equipment device of a radio-based communication network.
A system comprising one or more computing devices implements a vehicle software deployment management system. The vehicle software deployment management system enables clients to send signed serialized data chunks of a vehicle software application and a deployment plan for the software application to vehicles using a protocol agnostic transmission format. The vehicle software deployment management system may generate a deployment plan that may be processed by an in-vehicle application deployment planner/orchestrator of the vehicle to deploy the particular vehicle software application. The vehicle software deployment management system may send the vehicle software application using containers to be used by ECU agents of various ECUs of the vehicle. Furthermore, the vehicle software deployment management system may utilize received vehicle information to dynamically generate one or more updated vehicle deployment plans to send to respective vehicles.
Time and value ordering may be applied for items stored in data backups. A change log that persists changes to a data set may be updated with changes and used to update an in-memory table for the data set, which describes changes to items up to a current time. An event may be detected to seal the in-memory table from subsequent updates, and a persistent data object that orders the items in the in-memory table according to both keys of the respective items and the respective time values of the items, as stored in the change log, may be generated and stored as part of a backup for the data set.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
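The seal-and-order step in the backup abstract above can be sketched as a small in-memory table that freezes on seal and emits records ordered by item key and then by change time. The class and method names are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch: sealing an in-memory change table into a persistent
# object ordered by (key, time), as the abstract describes.

class MemTable:
    def __init__(self):
        self.changes = []  # (key, time, value) tuples from the change log
        self.sealed = False

    def apply(self, key, time, value):
        if self.sealed:
            raise RuntimeError("table is sealed against further updates")
        self.changes.append((key, time, value))

    def seal(self):
        """Freeze the table and emit records ordered by key, then time."""
        self.sealed = True
        return sorted(self.changes, key=lambda c: (c[0], c[1]))

table = MemTable()
table.apply("item-b", 20, "v2")
table.apply("item-a", 10, "v1")
table.apply("item-a", 30, "v3")

backup_object = table.seal()
# item-a's two versions come first (time-ordered), then item-b
```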
Prompt development techniques are implemented for tuning natural language processing machine learning models using selected prompts from a prompt task collection. A prompt development system may support requests to further adapt a pre-trained natural language processing machine learning model to tune the pre-trained natural language processing machine learning model for use with a selected prompt. Evaluation of the tuned natural language processing machine learning model may be performed and provided as a result.
In various examples, systems and methods of wireless communication link adaptation are described. In some examples, first data may be determined for a first gateway device, the first data representing a plurality of received signal strength (RSS) values of a first signal received from a first end node device. Second data representing an interference associated with the first signal may be determined. At least one of a first packet error rate or a first packet success rate may be determined based at least in part on the first data and the second data. A first modulation coding scheme (MCS) associated with at least one of the first packet error rate or the first packet success rate may be determined. Third data may be sent to the first end node device, the third data instructing the first end node device to use the first MCS for communication with the first gateway device.
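The link-adaptation flow above can be sketched as: estimate link quality from the RSS samples and the interference estimate, then pick the highest MCS whose threshold the link clears. The SINR thresholds below are made-up placeholder values, not drawn from any standard.

```python
# Illustrative link-adaptation sketch; thresholds and function names are
# assumptions for this example only.

def estimate_sinr_db(rss_dbm_values, interference_dbm):
    avg_rss = sum(rss_dbm_values) / len(rss_dbm_values)
    return avg_rss - interference_dbm

# Hypothetical table: minimum SINR (dB) needed for each MCS index.
MCS_THRESHOLDS = [(0, 5.0), (1, 10.0), (2, 15.0), (3, 20.0)]

def select_mcs(rss_dbm_values, interference_dbm):
    """Return the highest MCS index whose SINR threshold is met."""
    sinr = estimate_sinr_db(rss_dbm_values, interference_dbm)
    chosen = 0
    for mcs, min_sinr in MCS_THRESHOLDS:
        if sinr >= min_sinr:
            chosen = mcs
    return chosen

# Average RSS of -78 dBm against a -90 dBm interference floor gives 12 dB
# of headroom: enough for the 10 dB threshold (MCS 1) but not 15 dB (MCS 2).
mcs = select_mcs([-80, -76, -78], interference_dbm=-90)
```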
Techniques for role-based permission delegation in a provider network. The techniques include an assuming service in the provider network sending a request to a temporary credential service in the provider network to assume a delegation role. The assuming service, acting in the delegation role, sends a request to the temporary credential service to assume a customer role in accordance with a down-scoping policy. The assuming service, acting in the customer role, performs an action in a strict subset of actions on a customer resource. The techniques improve the operation of the provider network by allowing a permission to perform an action on the customer resource, granted by the customer to a delegating service in the provider network, to be delegated to the assuming service while complying with the access control principle of least privilege.
A system and method for authorization policy validation. A validator takes as input an authorization policy to be analyzed and a schema that specifies entity types and their attributes, types of entity parents in an entity hierarchy, and which entity types can be used with which actions. The validator checks that the policy conforms to the schema. If the check passes, then the policy is guaranteed to be free of both type errors and attribute access errors for any input that conforms to the schema.
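The validation check described above can be sketched as a function that walks a policy and confirms its action, entity types, and attribute accesses all conform to a declared schema. The schema and policy shapes below are invented for illustration; a real validator would cover the full entity hierarchy as well.

```python
# Minimal sketch of schema-based policy validation. Field names and the
# schema layout are assumptions for this example only.

SCHEMA = {
    "entity_types": {
        "User": {"attributes": {"department"}},
        "Photo": {"attributes": {"owner"}},
    },
    "actions": {"view": {"principal": "User", "resource": "Photo"}},
}

def validate(policy, schema=SCHEMA):
    """Return a list of validation errors (empty means the policy conforms)."""
    errors = []
    action = schema["actions"].get(policy["action"])
    if action is None:
        errors.append(f"unknown action {policy['action']!r}")
        return errors
    for role in ("principal", "resource"):
        etype = policy[role]["type"]
        if etype != action[role]:
            errors.append(f"{role} type {etype!r} not allowed for action")
        attrs = schema["entity_types"].get(etype, {}).get("attributes", set())
        for attr in policy[role].get("attrs", []):
            if attr not in attrs:
                errors.append(f"unknown attribute {attr!r} on {etype}")
    return errors

ok = validate({"action": "view",
               "principal": {"type": "User", "attrs": ["department"]},
               "resource": {"type": "Photo"}})
bad = validate({"action": "view",
                "principal": {"type": "User", "attrs": ["nickname"]},
                "resource": {"type": "Photo"}})
```

A policy that passes this check cannot reference an undeclared attribute or pair an action with the wrong entity types, which is the guarantee the abstract describes.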
Confidential tuning of pre-trained machine learning models may be provided. A request associated with a model user account to fine-tune a pre-trained machine learning model with model access restrictions may be received. The pre-trained machine learning model may be one of many pre-trained machine learning models uploaded for selection and fine-tuning. The pre-trained machine learning model may be further trained using a request specified data set, with the model access restrictions and access restrictions for the data set being enforced as part of the training. Then, the fine-tuned machine learning model may be made available for invocation by an application associated with the model user account without violating the model access restrictions and data access restrictions.
Systems and methods are provided for creating and running an instance of a dynamic access control system (DACS). Trust providers may be defined in a trust broker of the DACS such that trust information associated with the trust providers can be used to create a custom data structure. Resources and resource groups may be defined in the DACS. Policies may be configured or coded in the DACS to map the custom data structure to resources or resource groups. Additionally, policies may be configured or coded in the DACS to route the data structure and request to network segments or to share them with other parties.
An electric vehicle supply equipment (EVSE) holster including a locking mechanism and an image capture device is described herein. The image capture device is configured to capture image data relating to a connector face of a charging connector when the charging connector is inserted into the holster. The image data is then analyzed to determine if a physical abnormality is present on the charging connector (for example, damage to the charging connector or debris lodged in the charging connector). If a physical abnormality is detected, the locking mechanism is actuated to lock the charging connector to the holster until maintenance is performed on the charging connector.
Systems and methods are used to detect underlying themes from a collection of documents at an aggregated level. A representative set of documents may be selected from a cluster of documents, with the representative set of documents corresponding to a general theme of the cluster. Candidate theme phrases may then be extracted from the documents and used to generate document embeddings and candidate phrase embeddings, which may be ranked, such as with a diversity-based ranking approach. Certain candidates may be selected from the ranking. Each of the documents forming the representative set may then be concatenated and a query embedding may be generated and ranked against the candidate phrases. In this manner, a collection of phrases associated with both the general underlying theme of the cluster, along with granular topics associated with that theme, may be identified.
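One common diversity-based ranking approach that fits the step described above is maximal marginal relevance (MMR): greedily pick candidates that are relevant to the query embedding while penalizing similarity to already-selected candidates. The tiny hand-made "embeddings" below stand in for real model output; the abstract does not name a specific ranking algorithm, so MMR is an assumption here.

```python
# Sketch of MMR-style diversity ranking over candidate theme phrases.
# Vectors and the lambda trade-off value are illustrative only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def mmr(query_vec, candidates, k, lam):
    """Greedily pick k candidates balancing query relevance and diversity."""
    selected = []
    remaining = dict(candidates)
    while remaining and len(selected) < k:
        def score(name):
            relevance = cosine(query_vec, remaining[name])
            redundancy = max((cosine(remaining[name], candidates[s])
                              for s in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

candidates = {
    "shipping delays": (1.0, 0.1),
    "late delivery":   (0.95, 0.15),   # near-duplicate of the phrase above
    "refund policy":   (0.2, 1.0),
}
picked = mmr(query_vec=(1.0, 0.3), candidates=candidates, k=2, lam=0.5)
```

With the diversity penalty active, the second pick skips the near-duplicate "shipping delays" in favor of the dissimilar "refund policy".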
A biometric identification system processes input data acquired by input devices to determine embeddings used to identify a user. Different types of input devices or hardware configurations of input devices may produce different output. Each hardware configuration may be associated with respective representation data. A set of transformer networks are used to transform an embedding from one representation data associated with a first type of device or hardware configuration to another. This enables user participation via different configurations of hardware without requiring users to re-enroll for different input devices or hardware configurations. Opportunistic updates are made to the embeddings as embeddings native to a particular configuration of hardware are acquired from the user.
G06V 10/143 - Sensing or illuminating at different wavelengths
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
This disclosure describes a verification service within a service provider network for automatically verifying and validating documents. A user may upload a document image to the verification service. A pre-processing service may pre-process the document image. The pre-processed document image may then be forwarded to a first machine learning (ML) model for similarity evaluation. Once the first ML model has completed its evaluation of the document image, the first ML model may forward the document image to a second ML model for symbol recognition, which may then forward the document image to an optical character recognition (OCR) service for OCR validation. If the document image is validated, e.g., is an image of a purported document type, a publishing service may pre-populate, e.g., publish, information from the document image to an account template.
Techniques for enabling access in a multi-assistant speech processing system are described, where a first assistant system may use components of a second assistant system as data processing components. Runtime operational data and user input data related to the first assistant may be kept separate from the processing data and input data related to the second assistant by propagating a first account ID, for user inputs directed to the first assistant, through the processing pipeline, and using a second account for user inputs directed to the second assistant. A mapping between the first account ID and the second account ID may be accessible to a select number of system components. Handoffs between the two assistants are handled in a manner where data related to one assistant is not accessible by the other assistant.
An Application Programming Interface (API) allows a launching of a virtual machine where a queue count can be configured by a user. More specifically, each virtual machine can be assigned a pool of queues. Additionally, each virtual machine can have multiple virtual networking interfaces and a user can assign a number of queues from the pool to each virtual networking interface. Thus, a new metadata field is described that can be used with requests to launch a virtual machine. The metadata field includes one or more parameters that associate a number of queues with each virtual networking interface. A queue count can be dynamically configured by a user to ensure that the queues are efficiently used given that the user understands the intended application of the virtual machine being launched.
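A request carrying the per-interface queue metadata described above might look like the sketch below. The field names (`network_interfaces`, `queue_count`) and the 8-queue pool are invented for illustration, not a real cloud API; the validator simply checks that the interfaces do not oversubscribe the virtual machine's queue pool.

```python
# Hypothetical launch-request shape with per-interface queue counts.

def validate_launch_request(request, pool_size):
    """Ensure queue counts across all virtual interfaces fit the VM's pool."""
    requested = sum(iface["queue_count"]
                    for iface in request["network_interfaces"])
    if requested > pool_size:
        raise ValueError(
            f"requested {requested} queues but pool holds {pool_size}")
    return requested

launch_request = {
    "instance_type": "example.large",            # assumed 8-queue pool
    "network_interfaces": [
        {"device_index": 0, "queue_count": 6},   # busy primary interface
        {"device_index": 1, "queue_count": 2},   # lightweight management NIC
    ],
}
used = validate_launch_request(launch_request, pool_size=8)
```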
Systems and methods are provided for managing provision of—and access to—data sets among instances of function code executing in an on-demand manner. An API is provided by which functions can store data sets to be shared with other functions, and by which functions can access data sets shared by other functions.
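The store-and-share API described above can be sketched as a toy in-process stand-in: one function publishes a data set and names which other functions may read it, and another function retrieves it. The class and method names are invented for illustration; a real service would persist the data and enforce access control outside the process.

```python
# Toy sketch of a shared data-set API between on-demand functions.

class SharedDataStore:
    def __init__(self):
        self._sets = {}

    def put(self, owner, name, data, readers=()):
        """Store a data set and declare which callers may read it."""
        self._sets[(owner, name)] = {"data": data, "readers": set(readers)}

    def get(self, caller, owner, name):
        """Return a shared data set if the caller is the owner or a reader."""
        entry = self._sets[(owner, name)]
        if caller != owner and caller not in entry["readers"]:
            raise PermissionError(f"{caller} may not read {owner}/{name}")
        return entry["data"]

store = SharedDataStore()
# A producer function stores a data set and shares it with one consumer.
store.put("fn-producer", "daily-batch", [1, 2, 3], readers={"fn-consumer"})
batch = store.get("fn-consumer", "fn-producer", "daily-batch")
```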
Systems, methods, and devices are disclosed for front-lit displays having uniform brightness. In one embodiment, an example display may include an electrophoretic display, a light guide configured to direct light from one or more light emitting diodes, and a cover lens assembly. The cover lens assembly may include a cover glass layer, an anti-glare film coupled to the cover glass layer, and a hot melt adhesive disposed about lateral edge surfaces of the cover glass layer and the anti-glare film, such that the hot melt adhesive forms a perimeter of the cover lens assembly.
G02F 1/167 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulatingNon-linear optics for the control of the intensity, phase, polarisation or colour based on translational movement of particles in a fluid under the influence of an applied field characterised by the electro-optical or magneto-optical effect by electrophoresis
Techniques for customer-initiated virtual machine resource allocation sharing are described. A hardware virtualization service of a cloud provider network receives a request to launch a first virtual machine, wherein the first virtual machine is of a first virtual machine type, the first virtual machine type having a resource amount allocated to virtual machines of the first virtual machine type. The hardware virtualization service causes a launch of the first virtual machine on a host computer system of the cloud provider network. The host computer system shares an allocation of the resource amount from a corresponding resource of the host computer system between the first virtual machine and a second virtual machine, wherein the second virtual machine is of the first virtual machine type.
Systems and methods are described for implementing a distributed unit in a radio access network that executes code on behalf of mobile devices. A distributed unit may be implemented on an edge server that is in close physical proximity to a radio unit, with few or no intervening devices. The edge server may thus provide services to mobile devices, such as executing code on behalf of a mobile device in an execution environment on the edge server, at significantly lower latency than more distant cloud-based servers. The edge server may preload computing environments with code for which a mobile device is likely to request execution (e.g., because a particular application is executing on the mobile device), and may determine whether to execute code on the edge server or on a cloud provider network.
Systems and techniques are disclosed for predicting the structural status of an object. An object model, such as a machine learning model, can be trained on sample sensor data indicating vibrations, movements, and/or other reactions of objects with known desired and undesired structural statuses to a stimulus agent, such as a puff of air. A scanning device can output a corresponding stimulus agent towards an object, capture sensor data indicating the reaction of the object to the stimulus agent, and provide the sensor data to the trained object model. Based on the sensor data indicating how the object reacted to the stimulus agent, the object model can predict whether the object has a desired structural status or an undesired structural status.
Systems and methods are provided for translation of text in an image, and presentation of a version of the image in which the translated text is displayed in a manner consistent with the original image. Text segments are automatically translated from their original source language to a target language. In order to present the translated text in a manner that closely matches the source text, various display attributes of the source text are detected (e.g., font size, font color, font style, etc.).
A system and method for continual learning in a provider network. The method implements or interfaces with a system providing a semi-automated or fully automated architecture for continual machine learning, the architecture implementing user-configurable model retraining or hyperparameter tuning enabled by the provider network. This adapts a model over time to new information in the training data while also providing a user-friendly, flexible, and customizable continual learning process.
Techniques are described for providing a policy refiner application to analyze and recommend modifications to identity and access management policies created by users of a cloud provider network (e.g., to move the policies toward least-privilege permissions). A policy refiner application receives as input a policy to analyze, and a log of events related to activity associated with one or more accounts of a cloud provider network. The policy refiner application can identify, from the log of events, actions that were permitted based on particular statements contained in the policy. Based on field values contained in the corresponding events, the policy refiner application generates an abstraction of the field values, where the abstraction of the field values may represent a more restrictive version of the field from a policy perspective. These abstractions can be presented to users as recommendations for modifying their policy to reduce the privileges granted by the policy.
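One plausible form of the field-value abstraction step above is to collapse the concrete resource names seen in event logs into a common prefix pattern: broader than any single observed value, but narrower than a bare wildcard. The function and the ARN-like strings below are invented for illustration; the abstract does not specify this exact generalization.

```python
# Sketch of generalizing observed field values into a prefix pattern.
import os

def abstract_field_values(values):
    """Generalize observed values to '<common prefix>*' (or '*' if none)."""
    prefix = os.path.commonprefix(list(values))
    return prefix + "*" if prefix else "*"

observed = [
    "arn:example:s3:::team-logs/2024/01/app.log",
    "arn:example:s3:::team-logs/2024/02/app.log",
]
recommended = abstract_field_values(observed)
# A refiner could surface 'recommended' as a more restrictive policy value
# than a wildcard, covering exactly the observed activity prefix.
```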
Systems and methods for implementing record locking for transactions using a probabilistic data structure are described. This probabilistic structure enables adding of data records without growth of the data structure. The data structure includes a hash table for each of multiple hash functions, where entries in the respective hash tables store a transaction time and locking state. To lock a record, each hash function is applied to a record key to provide an index into a respective hash table and a minimum of the values stored in the hash tables is retrieved. If the retrieved value is less than a transaction time for a transaction attempting to lock the record, locking is permitted and the transaction time is recorded to each of the hash tables. To commit the transaction, the probabilistic data structure is atomically updated as part of the commit operation.
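The lock structure above can be sketched as a fixed set of hash tables, one per hash function, where each slot holds the latest transaction time; a record may be locked only if the minimum slot value across tables is older than the acquiring transaction. Table sizes and hash choices below are illustrative, and the separate locking-state flag mentioned in the abstract is folded into the time value for brevity.

```python
# Sketch of the fixed-size probabilistic lock table: adding records never
# grows the structure, at the cost of occasional false lock conflicts.
import hashlib

class ProbabilisticLockTable:
    def __init__(self, num_tables=3, slots=64):
        self.tables = [[0] * slots for _ in range(num_tables)]
        self.slots = slots

    def _indexes(self, key):
        for i in range(len(self.tables)):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.slots

    def try_lock(self, key, txn_time):
        """Lock key for txn_time if the stored minimum time is older."""
        stored = min(table[idx] for table, idx
                     in zip(self.tables, self._indexes(key)))
        if stored >= txn_time:
            return False           # a newer (or equal) transaction holds it
        for table, idx in zip(self.tables, self._indexes(key)):
            table[idx] = txn_time  # record the lock in every table
        return True

locks = ProbabilisticLockTable()
first = locks.try_lock("order-42", txn_time=100)   # succeeds
second = locks.try_lock("order-42", txn_time=90)   # older txn is refused
```

Taking the minimum across tables is what keeps the structure probabilistic rather than exact: a single colliding slot cannot falsely block a lock unless every table collides.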
A system for providing code suggestions according to licensing criteria is described. The system comprises computing devices that implement a code suggestion service. The code suggestion service receives a request that specifies licensing criteria via an interface of the code suggestion service. The code suggestion service determines respective licenses for a plurality of source code files by parsing the source code files against a source code attribution database. The code suggestion service generates a set of candidate code suggestions based, at least in part, on the plurality of source code files. The code suggestion service determines code suggestions from the set of candidate code suggestions that satisfy the licensing criteria based on the respective licenses. The code suggestion service provides the code suggestions determined to satisfy the licensing criteria.
A distributed database identifies classifications of risk associated with stages of a query plan. The distributed database generates an execution plan in which incompatible risk classifications are assigned to separate stages of an execution plan that is derived from the query plan. The stages are assigned to computing nodes for execution based, at least in part, on the risk classifications. A result for the query is generated based on execution of the stages on the assigned computing nodes.
Disclosed are various embodiments for seamless insertion of modified media content. In one embodiment, a modified portion of video content is received. The modified portion has a start cue point and an end cue point that are set relative to a modification to the video content to indicate respectively when the modification approximately begins and ends compared to the video content. A video coding associated with the video content is identified. The start cue point and/or the end cue point are dynamically adjusted to align the modified portion with the video content based at least in part on the video coding.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/8543 - Content authoring using a description language, e.g. MHEG [Multimedia and Hypermedia information coding Expert Group] or XML [eXtensible Markup Language]
MULTI-DOMAIN CONFIGURABLE DATA COMPRESSOR/DE-COMPRESSOR
A data service implements a configurable data compressor/decompressor using a recipe generated for a particular data set type and using compression operators of a common registry (e.g., pantry) that are referenced by the recipe, wherein the recipe indicates at which nodes of a compression graph respective ones of the compression operators of the registry are to be implemented. The configurable data compressor/decompressor provides a customizable framework for compressing data sets of different types (e.g., belonging to different data domains) using a common compressor/decompressor implemented using a common set of compression operators.
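The recipe/registry idea above can be sketched with a shared registry of compression operators (each paired with its inverse) and a recipe that names which operator runs at each node of a linear pipeline, a simple stand-in for the compression graph. The operator names, recipe format, and the integer-series example are invented for illustration.

```python
# Sketch of a configurable compressor built from a common operator registry.
import zlib

REGISTRY = {
    # delta-encode slowly-varying integers, and its inverse (prefix sums)
    "delta":   (lambda xs: [xs[0]] + [b - a for a, b in zip(xs, xs[1:])],
                lambda xs: [sum(xs[:i + 1]) for i in range(len(xs))]),
    # generic byte-level compression (toy: assumes values fit in a byte)
    "deflate": (lambda data: zlib.compress(bytes(b % 256 for b in data)),
                lambda blob: list(zlib.decompress(blob))),
}

def run_recipe(recipe, data, direction="compress"):
    """Apply the recipe's operators in order (or their inverses, reversed)."""
    ops = recipe if direction == "compress" else reversed(recipe)
    idx = 0 if direction == "compress" else 1
    for name in ops:
        data = REGISTRY[name][idx](data)
    return data

# A recipe tuned for slowly-varying integer time series.
recipe = ["delta", "deflate"]
compressed = run_recipe(recipe, [100, 101, 102, 103])
restored = run_recipe(recipe, compressed, direction="decompress")
```

Different data-set types would simply carry different recipes over the same registry, which is the customization point the abstract describes.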
A multitenant solver execution service provides managed infrastructure for defining and solving large-scale optimization problems. In embodiments, the service executes solver jobs on managed compute resources such as virtual machines or containers. The compute resources can be automatically scaled up or down based on client demand and are assigned to solver jobs in a serverless manner. Solver jobs can be initiated based on configured triggers. In embodiments, the service allows users to select from different types of solvers, mix different solvers in a solver job, and translate a model from one solver to another solver. In embodiments, the service provides developer interfaces to, for example, run solver experiments, recommend solver types or solver settings, and suggest model templates. The solver execution service relieves developers from having to manage infrastructure for running optimization solvers and allows developers to easily work with different types of solvers via a unified interface.
Disclosed are various embodiments for a distributed and synchronized core in a radio-based network. In one embodiment, a first radio access network (RAN)-enabled edge server at a first edge location is configured to perform a set of distributed unit (DU) functions for a radio-based network. The first RAN-enabled edge server is also configured to perform a set of core network functions and a set of centralized unit (CU) functions for the radio-based network. State associated with the set of core network functions and the set of CU functions is synchronized between the first RAN-enabled edge server and another server.