A system for determining personalized execution times for delivery of optimized content is disclosed. The system may evaluate activities of a user, occurring within a network, during various hours of the day. The system may determine, based on the evaluated activities, optimal time periods during a time interval in which to schedule provision of content items to a communication device(s) associated with the user. The system may determine a best time period among the optimal time periods. The system may generate, in advance of the optimal time periods, content items tailored to the user. The system may determine, in advance of the best time period, a highest ranked generated content item, among the generated content items, for delivery to the communication device(s) associated with the user during the best time period. The system may enable provision, to the communication device(s), of the highest ranked generated content item during the best time period.
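The scheduling described above can be sketched as ranking hours of the day by observed user activity and selecting the best one. The helper below is a hypothetical illustration (the function name, hour-level granularity, and top-k choice are all assumptions, not the claimed system):

```python
from collections import Counter

def rank_time_periods(activity_hours, top_k=3):
    """Rank hours of the day (0-23) by how often the user was active,
    returning the single best hour and the top-k optimal hours."""
    counts = Counter(activity_hours)
    top_hours = [hour for hour, _ in counts.most_common(top_k)]
    best_hour = top_hours[0]
    return best_hour, top_hours

# Hours at which a user's network activity was observed.
best, top = rank_time_periods([9, 9, 9, 13, 13, 20])
```

Content items would then be generated before `top` arrives and the highest-ranked one delivered during `best`.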
An online system receives explicit user data and explicit event data, and implicit user data and implicit event data from a third party system. The online system generates an implicit users/implicit events data feature, an explicit users/explicit events data feature, and an explicit users/implicit events data feature. The online system generates a prediction of the counterfactual rate based on the implicit users/implicit events data feature, the explicit users/explicit events data feature, and the explicit users/implicit events data feature, the counterfactual rate indicating the likelihood that target users matching certain characteristics caused an event to occur when the target users have not been presented with content by the online system, the content configured to induce users to cause the event to occur. A combined prediction rate is presented to the third party system based on the counterfactual rate.
Systems and methods for social media video streaming may include (1) presenting, within a short-form video being streamed as part of a short-form video feed, a selectable theme element, indicating a theme associated with the short-form video, (2) receiving user input selecting the selectable theme element, and (3) in response to receiving the user input, presenting a digest interface of short-form videos, relating to the theme, which includes (i) sets of tiles, in which each tile corresponds to a different short-form video, organized into subthemes, and (ii) a digital social engagement element, which provides an entry point for digital social engagement relating to the theme.
In one embodiment, a method includes receiving, from a client system associated with a user, a first user request that includes a reference to a target object and one or more of an attribute or a relationship of the target object. Visual data including one or more images portraying the target object may then be accessed, and the reference may be resolved to the target object portrayed in the one or more images. Object information of the target object that corresponds to the referenced attribute or relationship of the first user request may be determined based on a visual analysis of the one or more images. Finally, responsive to receiving the first user request, the object information of the target object may be stored in a multimodal dialog state.
In some embodiments, a computer-implemented method includes obtaining a without-eyeglasses face scan of a subject, the without-eyeglasses face scan being a three-dimensional (3D) model of a face of the subject without eyeglasses; obtaining a with-eyeglasses face scan of the subject, the with-eyeglasses face scan being a 3D model of the face of the subject with eyeglasses; and using the without-eyeglasses face scan and the with-eyeglasses face scan to place a 3D eyeglasses model on a face model of the subject. In some embodiments of the computer-implemented method, the 3D eyeglasses model is placed on the face model of the subject using frame placement information generated using the without-eyeglasses face scan and the with-eyeglasses face scan.
Methods, apparatuses, and systems for using a compensating window to correct tolerance-placement effects on camera focus are provided. The system may receive a first captured image of a first test target from a surface of a target plane. The first captured image may be captured using a first lens of a camera. The system may determine a first modulation transfer function measurement for the first captured image. The system may determine that the first modulation transfer function measurement is within a threshold measurement. The system may send an alert indicating that the first lens is within the threshold measurement.
H04N 25/615 - Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4" involving a transfer function modelling the optical system, e.g. optical transfer function [OTF], phase transfer function [PhTF] or modulation transfer function [MTF]
Technology for customized crest factor reduction (CFR) noise shaping includes dividing a frequency band into a plurality of regions, assigning a constellation goal for each region, the respective constellation goal for at least two regions being different, determining a CFR noise level for each region based on the constellation goal for the region and a target CFR noise level for the divided frequency band, creating a cancellation pulse based on scaling factors, and based on the cancellation pulse, applying a cancellation pulse signal on a per-region basis to generate transmission signals having the determined CFR noise level for each region. In examples, a first region has a first constellation goal and a second region has a second constellation goal, and a determined CFR noise level for the first region supports the first constellation goal and a determined CFR noise level for the second region supports the second constellation goal.
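The per-region budgeting described above can be illustrated by splitting a total CFR noise budget in proportion to each region's constellation-dependent tolerance. The EVM figures and the proportional-split rule below are assumptions for illustration, not the patented method:

```python
# Hypothetical EVM allowances per constellation goal: denser
# constellations tolerate less CFR noise.
EVM_BUDGET = {"QPSK": 0.15, "64QAM": 0.06, "256QAM": 0.03}

def cfr_noise_levels(region_goals, target_total_noise):
    """Split a target CFR noise level across frequency-band regions in
    proportion to each region's constellation-dependent allowance."""
    budgets = [EVM_BUDGET[goal] for goal in region_goals]
    total = sum(budgets)
    return [target_total_noise * b / total for b in budgets]

# Two regions with different constellation goals share one budget.
levels = cfr_noise_levels(["QPSK", "256QAM"], target_total_noise=1.0)
```

A cancellation pulse would then be scaled per region so that each region's transmitted noise stays within its computed level.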
H04L 27/34 - Amplitude- and phase-modulated carrier systems, e.g. quadrature-amplitude modulated carrier systems
10.
Methods, apparatuses and computer program products for providing transmission chirped volume Bragg grating based compact waveguide in-couplers for light sources
Methods and systems for propagating light into a waveguide are provided. The system may include a light source configured to generate light. The system may include at least one mirror configured to direct the light into one or more rays of light. The system may include a Surface Relief Grating disposed on a Volume Bragg Grating. The Surface Relief Grating may receive the one or more rays of light and may diffract the one or more rays of light. The Volume Bragg Grating may be disposed on the waveguide in which the waveguide may be configured to receive the one or more rays of light from the Volume Bragg Grating and propagate the one or more rays of light throughout the waveguide such that an off-Bragg condition is exhibited by the one or more rays of light propagating through the waveguide.
The disclosed computer-implemented method may include systems for incorporating a user's avatar into a real-time communication session. For example, the described systems establish a real-time communication session between two or more social networking system users. The described systems further generate a landmark map representing positioning of one of the real-time communication session participants, and transmit the landmark map with the participant's avatar to one or more recipients. On the recipient-side, the described systems render the transmitted avatar according to the landmark map. Various other methods, systems, and computer-readable media are also disclosed.
A head-mounted display including multiple displays tiled together is provided. A first display may include an active region and an inactive region. A second display may be aligned with the inactive region. The first display and the second display may have different resolutions and may present visual information with different display qualities based on their respective resolutions. For example, the first display may include a lower resolution than the second display. The inactive region may be centered in the first display, and the second display may be centered with respect to the first display. The head-mounted display may provide a continuous image(s) (or video(s)) with a high resolution on the second display, while providing the remainder of the image(s) on the low resolution first display. Based on the multiple displays being tiled, the multiple displays may present one or more images, or one or more videos, in a non-overlapping manner.
G09G 3/3208 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED] organic, e.g. using organic light-emitting diodes [OLED]
13.
CHIP-TO-CHIP INTERCONNECT WITH A LAYERED COMMUNICATION ARCHITECTURE
A system includes a first integrated circuit package including a first group of one or more artificial intelligence processing units and a first chip-to-chip interconnect communication unit and a second integrated circuit package including a second group of one or more artificial intelligence processing units and a second chip-to-chip interconnect communication unit. The system also includes an interconnect between the first integrated circuit package and the second integrated circuit package, wherein the first chip-to-chip interconnect communication unit and the second chip-to-chip interconnect communication unit manage Ethernet-based communication via the interconnect using a layered communication architecture supporting a credit-based data flow control and a retransmission data flow control.
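The credit-based data flow control named above can be sketched as a simple credit counter: the sending chip may only transmit while it holds credits, and the receiving chip returns credits as it drains its buffers. The class below is illustrative, not the patented architecture:

```python
class CreditFlowControl:
    """Minimal sketch of credit-based flow control on a chip-to-chip
    interconnect endpoint (names and frame handling are hypothetical)."""

    def __init__(self, initial_credits):
        self.credits = initial_credits

    def can_send(self):
        return self.credits > 0

    def send(self, frame):
        if not self.can_send():
            raise RuntimeError("no credits: receiver buffers are full")
        self.credits -= 1
        return frame  # would be handed to the Ethernet-based link here

    def on_credit_return(self, n=1):
        # The receiver returns credits as it frees buffer space.
        self.credits += n

link = CreditFlowControl(initial_credits=2)
link.send("frame-0")
link.send("frame-1")
stalled = not link.can_send()  # sender must now wait for credits
link.on_credit_return()
```

Retransmission control would layer on top of this, resending frames that the peer reports as lost.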
In one embodiment, a method includes receiving a first user request from a first user for generating a media montage from a client system during a dialog session with the first user, generating an initial media montage during the dialog session based on media collections associated with the first user, sending instructions for presenting the initial media montage to the client system during the dialog session, receiving a second user request from the first user from the client system during the dialog session for editing the initial media montage, generating an edited media montage from the initial media montage during the dialog session based on the second user request and a memory graph associated with the first user, and sending instructions for presenting the edited media montage to the client system during the dialog session.
The disclosed computer-implemented method may include systems and methods for embedding specific data into a call stack associated with an application session. For example, the systems and methods described herein can initialize a program thread that sequentially executes specialized application functions based on characters of a unique identifier to embed the unique identifier within a call stack of the application session. The systems and methods further provide the unique identifier in connection with other data sources associated with the application session such that further analysis of all data associated with the application session may be cross-referenced according to the unique identifier. Various other methods, systems, and computer-readable media are also disclosed.
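The call-stack embedding can be illustrated with a small Python sketch: one specially named function is executed per character of the unique identifier, so the identifier can later be read back out of a stack trace. The helper names (`make_marker`, `embed_id`, the `marker_<char>` scheme) are all hypothetical, not taken from the disclosure:

```python
import traceback

def make_marker(name):
    # exec gives the function a real code-object name, so the name
    # appears verbatim in stack traces.
    ns = {}
    exec(f"def {name}(next_fn):\n    return next_fn()", ns)
    return ns[name]

def embed_id(unique_id, on_bottom):
    # Sequentially execute one specially named function per character,
    # embedding the identifier within the call stack.
    def run(i):
        if i == len(unique_id):
            return on_bottom()
        marker = make_marker(f"marker_{unique_id[i]}")
        return marker(lambda: run(i + 1))
    return run(0)

def read_id_from_stack():
    # Recover the embedded characters from the live call stack.
    return "".join(frame.name.split("_", 1)[1]
                   for frame in traceback.extract_stack()
                   if frame.name.startswith("marker_"))

recovered = embed_id("7f3", read_id_from_stack)
```

In practice the identifier would be recovered from crash dumps or profiler samples and cross-referenced against other session data sources.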
The disclosed systems and methods may include (1) providing a selectable dashboard element within an interface of a dating application configured for a user of the dating application, (2) receiving user input selecting the dashboard element, and (3) in response to receiving the user input, presenting a digital dating-analytics dashboard that provides information relating to the user's dating history aggregated via the dating application. Various other methods, systems, and computer-readable media are also disclosed.
The disclosed systems and methods may include (1) receiving, from a user who has a dating account with a dating application maintained by a social media platform, user input designating a contact of the user, connected to the user via the social media platform, as a matchmaker for the dating account and (2) in response to receiving the user input designating the contact as a matchmaker for the dating account, providing the contact with access to a recommended-matches interface of the dating application, comprising a queue of recommended dating matches selected for the user, in a matchmaker mode that enables the contact to perform one or more digital matchmaking actions. Various other methods, systems, and computer-readable media are also disclosed.
In one embodiment, a method includes accessing a decoded hypothesis corresponding to an utterance, computing a predicted probability of observing each token in the decoded hypothesis by having a local first machine-learning model process the decoded hypothesis, computing a confidence score for each token in the decoded hypothesis by having a second machine-learning model process the decoded hypothesis, where the confidence score indicates a degree of confidence for the token to be observed at its position, calculating a loss for the computed predicted probabilities of observing tokens in the decoded hypothesis based on the computed confidence scores, and updating parameters of the local first machine-learning model based on the calculated loss.
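One plausible form of the loss described above penalizes disagreement between the local model's per-token probabilities and the second model's confidence scores. The squared-error formulation below is an assumption for illustration; the abstract does not specify the loss function:

```python
def confidence_alignment_loss(predicted_probs, confidence_scores):
    """Mean squared disagreement between the local first model's
    predicted token probabilities and the second model's per-token
    confidence scores (a sketch, not the disclosed loss)."""
    assert len(predicted_probs) == len(confidence_scores)
    n = len(predicted_probs)
    return sum((p - c) ** 2
               for p, c in zip(predicted_probs, confidence_scores)) / n

# Two tokens of a decoded hypothesis: probabilities from the local
# model, confidence scores from the second model.
loss = confidence_alignment_loss([0.9, 0.4], [0.8, 0.6])
# parameters of the local model would then be updated to reduce `loss`
```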
An image processing system enables a user wearing a head-mounted display to experience a virtual environment combined with a representation of a real-world object. The image processing system receives a captured scene of a real-world environment that includes a target object. The image processing system identifies the target object in the captured scene and generates a representation of the target object. In some cases, the image processing system may include a graphical overlay with the representation of the target object. The image processing system can generate a combined scene that includes the target object and the virtual environment. The combined scene is presented to the user, thereby allowing the user to interact with the real-world target object (or a representation thereof) in combination with the virtual environment.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
21.
Display screen or portion thereof with a graphical user interface
Systems, apparatuses and methods provide technology that compresses first data based on a first compression scheme to generate second data, where the first data is associated with a first machine learning model. The technology stores the second data into a memory, adjusts a first entry of a lookup table to correspond to the first compression scheme based on the first data being compressed based on the first compression scheme, provides the second data from the memory to processing elements of a processing array during execution of the first machine learning model, and decompresses, at the processing array, the second data based on the lookup table to obtain the first data.
H03M 7/42 - Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code using table look-up for the coding or decoding process, e.g. using read-only memory
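The lookup-table mechanism above can be sketched as follows: the table records which compression scheme was applied to each stored buffer, so the consumer (standing in for the processing array) can pick the matching decompressor. zlib is a stand-in scheme; the actual schemes, names, and memory layout are assumptions:

```python
import zlib

# scheme name -> (compress, decompress) pair
SCHEMES = {
    "raw": (lambda d: d, lambda d: d),
    "zlib": (zlib.compress, zlib.decompress),
}

lookup_table = {}  # buffer id -> compression scheme used

def store(buffer_id, data, scheme, memory):
    compress, _ = SCHEMES[scheme]
    memory[buffer_id] = compress(data)  # second data into memory
    lookup_table[buffer_id] = scheme    # adjust the table entry

def load(buffer_id, memory):
    _, decompress = SCHEMES[lookup_table[buffer_id]]
    return decompress(memory[buffer_id])  # recover the first data

memory = {}
weights = b"weights" * 100  # stand-in for model data
store(0, weights, "zlib", memory)
restored = load(0, memory)
```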
24.
Display screen with an animated graphical user interface
Various aspects of the subject technology relate to systems, methods, and machine-readable media for bundling data into media files. The disclosed method comprises receiving a player video file, wherein the player video file is generated on a social media platform. The method includes identifying a storage section of the player video file configured to store an editable video file. The method includes generating the editable video file comprising metadata that includes a plurality of parameters associated with the player video file. A parameter of the plurality of parameters identifies an editable element of the player video file. The method includes generating a bundled video file by appending the editable video file in the storage section of the player video file. The method comprises exporting the bundled video file to storage located external to the social media platform.
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
26.
PRECISION TIME PROTOCOL REDUNDANCY AND LOAD BALANCING ACROSS A SET OF ACTIVE TIME SERVERS
A system for achieving Precision Time Protocol (PTP) redundancy and load balancing across a set of active time servers deployed in a cluster is disclosed. The system may organize a set of PTP Grandmasters (GMs) (e.g., located in a datacenter) into a plurality of PTP clusters (e.g., for load balancing). A set of PTP ordinary clocks (OCs) (e.g., time clients) may be grouped in OC Groups and then into PTP clusters for GM assignment, wherein each PTP OC includes a unicast master table (UMT) that includes one or more potential PTP GM port addresses and priority values. Based on a Master Clock Mechanism, one of the potential GMs may be selected based on one or more PTP GM priority attributes.
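The selection step above can be sketched as picking the candidate with the best priority from an ordinary clock's unicast master table. The hostnames below are hypothetical, and the real selection would apply PTP's full best-master-clock comparison rather than a single priority field:

```python
def select_grandmaster(unicast_master_table):
    """Pick the PTP grandmaster with the best (numerically lowest)
    priority value from an OC's unicast master table -- a
    simplification of the best-master-clock comparison."""
    return min(unicast_master_table, key=lambda gm: gm["priority"])

# A UMT holding two potential GM port addresses with priority values.
umt = [
    {"address": "gm-a.dc1.example", "priority": 128},
    {"address": "gm-b.dc1.example", "priority": 64},
]
selected = select_grandmaster(umt)
```

Redundancy follows naturally: if the selected GM fails, the OC re-runs the selection over the remaining UMT entries.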
In one embodiment, a method includes receiving, at a client system, an audio input, where the audio input comprises a coreference to a target object, accessing visual data from one or more cameras associated with the client system, where the visual data comprises images portraying one or more objects, resolving the coreference to the target object from among the one or more objects, resolving the target object to a specific entity, and providing, at the client system, a response to the audio input, where the response comprises information about the specific entity.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/222 - Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
An image processing system enables a user wearing a head-mounted display to experience a virtual environment combined with a representation of a real-world object. The image processing system receives a captured scene of a real-world environment that includes a target object. The image processing system identifies the target object in the captured scene and generates a representation of the target object. In some cases, the image processing system may include a graphical overlay with the representation of the target object. The image processing system can generate a combined scene that includes the target object and the virtual environment. The combined scene is presented to the user, thereby allowing the user to interact with the real-world target object (or a representation thereof) in combination with the virtual environment.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 5/445 - Receiver circuitry for displaying additional information
29.
Display screen with an animated graphical user interface
In one embodiment, a method includes receiving a voice input from a first user from a client system via an assistant xbot associated with an assistant system, determining that the voice input includes an out-of-coverage request including intents or slots that are unsupported by the assistant system, generating a contextual response for the out-of-coverage request, wherein the contextual response includes an implicit confirmation of the out-of-coverage request and a failure response to the out-of-coverage request, wherein the implicit confirmation references the voice input from the first user, and wherein the failure response indicates that the assistant system is unable to execute a task corresponding to the out-of-coverage request, and sending instructions for presenting the contextual response responsive to the voice input to the client system.
Systems and methods for smart message delivery and message handoff between communication devices and artificial reality systems are provided. In various examples, a communication device or HMD may receive a message that initiates detection of the motion(s) and/or position(s) of the communication device and the HMD, associated with a user, with respect to each other. The communication device and/or the HMD may determine a movement level associated with each other, and a message delivery process may be determined based on the movement level relative to a predetermined threshold. If the predetermined threshold is not met, the message may be output to the user via audio; if the predetermined threshold is met, text associated with the message may be presented by the communication device for the user to read. The system may further monitor the movement level while the message is being output to the user, and if the predetermined threshold is no longer met, the message may be output to the user via audio by the HMD starting from where the user stopped reading the text of the message. In situations where the movement level increases to meet or exceed the predetermined threshold, the message may be output as text by the communication device, and the text corresponding to the portion of the message already read aloud as audio by the HMD may be shown in a different format to enable the user to easily determine where to start reading.
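The threshold logic above reduces to a small decision function. Following the abstract's convention (threshold met: text; threshold not met: audio), with the movement-level measure itself being an assumption:

```python
def delivery_mode(movement_level, threshold):
    """Choose the message delivery process: text on the communication
    device when the movement level meets the predetermined threshold,
    otherwise audio via the HMD (a sketch of the described handoff)."""
    return "text" if movement_level >= threshold else "audio"

mode_above = delivery_mode(movement_level=0.9, threshold=0.5)
mode_below = delivery_mode(movement_level=0.2, threshold=0.5)
```

Continuous monitoring would re-invoke this function as the movement level changes, handing the message off mid-delivery between the two devices.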
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
33.
Systems and methods for screenless computerized social-media access
Systems and methods for screenless computerized social-media access may include (1) producing, via an audio speaker that is communicatively coupled to a computing device, a computer-generated verbal description of a social-media post provided via a social-media application, (2) detecting, via a microphone that is communicatively coupled to the computing device, an audible response to the social-media post from a user of the computing device, and (3) digitally responding to the social-media post in accordance with the detected audible response. Various other methods, systems, and computer-readable media are also disclosed.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
34.
Systems and methods for prioritizing digital social media posts within a social media interface
The disclosed systems and methods may include (1) identifying social media posts that have been posted to a social media platform by a creator account with fewer than a designated number of followers, (2) creating a social-engagement score for the creator account based on a measure of an amount of digital social engagement generated in response to the social media posts, (3) determining that the social-engagement score surpasses a threshold, (4) in response to (i) the creator account having fewer than the designated number of followers and (ii) the social-engagement score surpassing the threshold, designating the creator account as a prioritized creator account, and (5) presenting, in association with the creator account, a digital visual indicator indicating that the creator account is a prioritized creator account. Various other methods, systems, and computer-readable media are also disclosed.
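Steps (1)-(4) above reduce to a two-condition test. The numeric values below are hypothetical placeholders for the designated follower count and score threshold:

```python
def is_prioritized_creator(follower_count, engagement_score,
                           designated_followers=10_000,
                           score_threshold=0.8):
    """A creator account is designated as prioritized when it has fewer
    than the designated number of followers AND its social-engagement
    score surpasses the threshold (both cutoffs are assumptions)."""
    return (follower_count < designated_followers
            and engagement_score > score_threshold)

prioritized = is_prioritized_creator(2_500, 0.92)
not_prioritized = is_prioritized_creator(50_000, 0.92)
```

Step (5) would then attach a digital visual indicator to any account for which this returns true.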
Techniques for enabling users logged-out of a social networking system to access content associated with the social networking system are described. For example, based at least in part on determining a user is unassociated with a social networking system, the social networking system may present content to the user. In some examples, the amount of content available to the user may be limited, in which the user may be prompted to create an account. In some examples, logged-out users may be prompted to pick topics associated with their interests, allowing the social networking system to curate content for logged-out users.
In one embodiment, a method includes receiving, at a head-mounted device, touch inputs from a user corresponding to an activation trigger for an assistant system executing on the head-mounted device, accessing, by the head-mounted device, signals from inertial measurement unit (IMU) sensors of the head-mounted device, determining, by an on-device don/doff detection model and based only on the signals from the IMU sensors, that the user is either donning or doffing the head-mounted device, and overriding the activation trigger to prevent an activation of the assistant system responsive to the received touch inputs.
The disclosed system may include (1) a drive subsystem that translates along a powerline conductor, (2) a rotation subsystem that rotates a segment of fiber optic cable about the powerline conductor while the drive subsystem translates along the powerline conductor such that the segment of fiber optic cable is wrapped helically about the powerline conductor, and (3) an extension subsystem that (a) mechanically couples the rotation subsystem to the drive subsystem, and (b) selectively extends the rotation subsystem away from the drive subsystem and the powerline conductor to avoid obstacles along the powerline conductor. Various other systems and methods are also disclosed.
H02G 1/02 - Methods or apparatus specially adapted for installing, maintaining, repairing, or dismantling electric cables or lines for overhead lines or cables
Methods, systems, and storage media for determining the similarities of post content for mapping into a hyperspace. In an exemplary method, the disclosure comprises receiving a query at a processor. The method includes determining post data associated with the query. The post data comprises a plurality of posts provided to a social media platform by various users of the platform. The method includes determining, by the processor, a relationship between at least two posts of the plurality of posts. The method includes training, by the processor, a machine learning model. The machine learning model is based on the query and the relationship between the at least two posts. The method also generates a hyperspace based on the relationship between the at least two posts and the query.
The disclosed systems and methods may include (1) providing, within a digital help center article directed to delineating how to accomplish a digital action within an application, both (i) a description of how to manually accomplish the digital action (including a list of digital steps that, when manually initiated via user input to the application, trigger the digital action to be executed within the application) and (ii) a selectable element configured to trigger the digital action to be executed, bypassing one or more of the digital steps listed in the description, when selected via user input, (2) receiving user input digitally selecting the selectable element within the digital help center article, and (3) in response to receiving the user input digitally selecting the selectable element within the digital help center article, executing the digital action within the application. Various other methods, systems, and computer-readable media are also disclosed.
Systems, methods, and non-transitory computer-readable media can determine at least one graphical overlay created by a content creator. The at least one graphical overlay can be distributed to at least one user through a social networking system. A determination can be made of a user selection of the at least one graphical overlay. The graphical overlay can be applied to at least one content item.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
41.
DETECTING HEAD GESTURES USING INERTIAL MEASUREMENT UNIT SIGNALS
In one embodiment, a method includes presenting a suggestion to a user of a head-mounted device by the head-mounted device via an assistant xbot during a dialog session between the user and the assistant xbot, wherein the suggestion is associated with a plurality of actions to be performed by an assistant system associated with the assistant xbot, accessing signals from inertial measurement unit (IMU) sensors of the head-mounted device by the head-mounted device during the dialog session, determining a head gesture performed by the user during the dialog session by an on-device head-gesture detection model and based only on the signals from the IMU sensors, and executing a first action from the plurality of actions by the assistant system executing on the head-mounted device, wherein the first action is selected based on the determined head gesture during the dialog session.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
42.
Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion
In one embodiment, a method includes receiving a user's utterance comprising a word in a custom vocabulary list of the user, generating a previous token to represent a previous audio portion of the utterance, and generating a current token to represent a current audio portion of the utterance by generating a bias embedding by using the previous token to query a trie of wordpieces representing the custom vocabulary list, generating first probabilities of respective first candidate tokens likely uttered in the current audio portion based on the bias embedding and the current audio portion, generating second probabilities of respective second candidate tokens likely uttered after the previous token based on the previous token and the bias embedding, and generating the current token to represent the current audio portion of the utterance based on the first probabilities of the first candidate tokens and the second probabilities of the second candidate tokens.
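The final combination step above resembles log-linear shallow fusion: the acoustic-path token probabilities are combined with the bias-path probabilities derived from the trie-backed custom vocabulary. The interpolation weight and smoothing floor below are assumptions, not values from the disclosure:

```python
import math

def fuse_token_probs(first_probs, second_probs, bias_weight=0.5):
    """Log-linearly fuse two candidate-token distributions and return
    the winning current token (a sketch of shallow fusion)."""
    floor = 1e-9  # avoid log(0) for tokens absent from one path
    vocab = set(first_probs) | set(second_probs)
    fused = {tok: math.log(first_probs.get(tok, floor))
                  + bias_weight * math.log(second_probs.get(tok, floor))
             for tok in vocab}
    return max(fused, key=fused.get)

# Acoustics alone are ambiguous; the bias path, which knows the user's
# custom vocabulary contains "play", breaks the tie.
current_token = fuse_token_probs({"play": 0.5, "pray": 0.5},
                                 {"play": 0.9, "pray": 0.1})
```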
In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing one or more sensor signals associated with the first user, which may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device, where one or more of the AR objects may be adapted based on the detected change in the context of the first user.
G06F 1/3206 - Monitoring of events, devices or parameters that trigger a change in power modality
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
Systems, methods, devices, and computer program products are provided for producing observable virtual images. Aspects may include at least one illumination source emitting light on a display and a transparent combining optic including a holographic optical element (HOE). According to various examples, light emitted from the at least one illumination source illuminates the transparent combining optic, and the transparent combining optic diffracts the light to generate an observable virtual image. The observable virtual image may be positioned to overlay a scene viewable through the transparent combining optic. Such aspects may be incorporated on a variety of technologies, such as head-mounted display systems, smart glasses, and/or AR devices.
A system and method for generating an eye box, an observable virtual image, and/or a multiplexed hologram are provided. The system may include a transparent combining optic including a holographic optical element. The transparent combining optic may be configured to diffract light received at a first side of the transparent combining optic. The light may be generated by an illumination source. The transparent combining optic may also be configured to form a virtual image viewable from a non-pupil-forming eyebox. The observable virtual image may be viewable from the first side of the transparent combining optic.
Systems, apparatuses and methods provide technology that identifies a plurality of data samples of a data set. In response to a trigger, the technology identifies a first subset of samples of the plurality of data samples that are outside a range associated with a decision boundary of a machine learning model, and removes the first subset of samples from the data set to generate a modified training set. The technology trains the machine learning model based on the modified training set.
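The filtering step above can be sketched with assumed specifics that are not in the abstract: a linear decision function f(x) = w·x + b stands in for the model, and the "range associated with a decision boundary" is taken to be a band of margins [lo, hi]; samples whose margin falls outside that band are removed before retraining.

```python
def filter_training_set(samples, w, b, lo=0.0, hi=1.0):
    """Keep only samples whose distance to the decision boundary is in [lo, hi]."""
    def margin(x):
        # Unnormalized distance of x from the hyperplane w.x + b = 0.
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [x for x in samples if lo <= margin(x) <= hi]

samples = [(0.1, 0.1), (2.0, 2.0), (0.5, 0.4)]
# (2.0, 2.0) sits far outside the margin band and is dropped.
modified = filter_training_set(samples, w=(1.0, 1.0), b=-1.0, lo=0.0, hi=1.0)
```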
Techniques are described for a social networking system to provide a collaborative fundraising event that allows multiple organizers of the event to publish, edit, and receive attribution for fundraiser content. In some cases, the social networking system may be configured to distribute the fundraiser content based on metrics, support, and relationships between organizers and other users of the social networking system.
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
48.
XR MEDIA CHANNELS FOR IMMERSIVE REALTIME COMMUNICATION
Systems, methods, and non-transitory computer-readable media can be configured to perform operations comprising determining capabilities associated with a plurality of devices in a communication session, determining one or more media channels for transmission of media between the plurality of devices based on the capabilities, and facilitating the transmission of media between the plurality of devices via the one or more media channels.
In one embodiment, a method for retrieving a compressed data chunk from a source memory to a data buffer using a direct-memory access includes generating a source address indicating a location in the source memory at which a metadata corresponding to a compressed data chunk is stored, reading the metadata from the source address, where the metadata includes a data address, a size and compression options associated with the compressed data chunk, reading the compressed data chunk from the source memory based on the data address and the size within the metadata, decompressing the compressed data chunk based on the compression options within the metadata, and storing the decompressed data chunk into the data buffer.
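The metadata-driven retrieval flow above can be sketched as follows, with assumed structures that are not in the abstract: source memory is modeled as a dict keyed by address, the metadata is a (data_address, size, compression) tuple, and a toy zero-run-length scheme stands in for the unspecified compression options.

```python
def read_metadata(memory, source_address):
    """Metadata stored at source_address: (data_address, size, compression)."""
    return memory[source_address]

def decompress(chunk, compression):
    if compression == "zero_rle":
        # Each element is a literal byte or ("Z", n) marking n zero bytes.
        out = []
        for item in chunk:
            if isinstance(item, tuple):
                out.extend([0] * item[1])
            else:
                out.append(item)
        return out
    return list(chunk)

def dma_fetch(memory, source_address, data_buffer):
    """Read metadata, then read and decompress the chunk it points to."""
    data_address, size, compression = read_metadata(memory, source_address)
    chunk = memory[data_address][:size]
    data_buffer.extend(decompress(chunk, compression))

memory = {0x10: (0x100, 3, "zero_rle"), 0x100: [7, ("Z", 4), 9]}
buf = []
dma_fetch(memory, 0x10, buf)  # buf -> [7, 0, 0, 0, 0, 9]
```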
G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
50.
CIRCULAR BUFFER FOR INPUT AND OUTPUT OF TENSOR COMPUTATIONS
In one embodiment, a method includes receiving, from a direct memory access component, a token indicating that a data chunk has become available in a first circular buffer of a pre-determined size, determining, based on the token, that a computation is to be performed with data including the data chunk, and generating one or more addresses corresponding to one or more data chunks within the first circular buffer that are to be retrieved for the computation, where the pre-determined size of the first circular buffer is subtracted from a generated address when the generated address is greater than a pre-determined maximum associated with the first circular buffer, and where the pre-determined size of the first circular buffer is added to the generated address when the generated address is less than a pre-determined minimum associated with the first circular buffer.
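The address wrap-around rule can be sketched directly; the buffer base, size, and bounds below are assumed example values, not figures from the patent.

```python
BUF_MIN, BUF_SIZE = 0x100, 0x40
BUF_MAX = BUF_MIN + BUF_SIZE - 1  # 0x13F

def wrap(address):
    """Fold a generated address back into the circular buffer's range."""
    if address > BUF_MAX:
        address -= BUF_SIZE   # past the end: subtract the buffer size
    elif address < BUF_MIN:
        address += BUF_SIZE   # before the start: add the buffer size
    return address

high = wrap(0x140)  # one past the end wraps to the start: 0x100
low = wrap(0xF0)    # before the start wraps to the end region: 0x130
```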
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
In one embodiment, a method includes sending instructions for presenting a visual programming interface for a composer tool to a client system, wherein the visual programming interface comprises primitives for conversation design, wherein the primitives comprise at least an input-primitive, a response-primitive, and a decision-primitive, receiving instructions from a user for creating a conversation design flow for an application via the visual programming interface from the client system, wherein the conversation design flow comprises at least one or more input-primitives for one or more voice inputs and one or more input-primitives for one or more signal inputs, simulating an execution of the conversation design flow within the composer tool, and exporting the conversation design flow to a software package configured to be executable by the application, wherein the application is operable to process voice inputs and signal inputs according to the input-primitives of the conversation design flow.
In one embodiment, a method includes accessing an instruction that is to be executed by a control agent within a computing system, decompressing the instruction by replacing each of one or more zero-symbol run-length fields in the instruction with as many consecutive zero symbols as its corresponding value indicates and removing one or more non-zero-symbol run-length fields from the instruction, determining, based on a compression data header associated with the instruction, that the instruction is spatial-delta-encoded, performing, in response to the determination, spatial-delta decoding on the instruction by determining, in order, a spatial-delta-decoded value of each bit in the instruction, and causing the instruction to be sent to the control agent.
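The two decode steps can be sketched with toy encodings that are assumptions, not the patent's formats: the compressed instruction is a list in which ("Z", n) marks a run of n zero bits, and spatial-delta encoding is taken to store each bit XORed with the previously decoded bit.

```python
def expand_zero_runs(fields):
    """Replace each zero-run field with that many zero symbols."""
    bits = []
    for f in fields:
        if isinstance(f, tuple) and f[0] == "Z":
            bits.extend([0] * f[1])
        else:
            bits.append(f)
    return bits

def spatial_delta_decode(bits):
    """Recover each bit from its delta against the previous decoded bit."""
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

raw = expand_zero_runs([1, ("Z", 3), 1, 1])  # -> [1, 0, 0, 0, 1, 1]
decoded = spatial_delta_decode(raw)          # -> [1, 1, 1, 1, 0, 1]
```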
Systems and methods are provided for operating an artificial reality device using gestures. In various exemplary embodiments, an artificial reality device may receive information initiating a gesture recognition mode via the artificial reality device, and cause a camera associated with the artificial reality device to operate in the gesture recognition mode. The artificial reality device may further receive at least one image of a gesture captured by the camera and identify and evaluate the gesture. The gesture may be indicative of a user command. The artificial reality device may also execute the user command, and may generate feedback confirming execution of the user command.
The disclosed computer-implemented method may include (i) sending, to a cloud gaming server, cloud gaming input information with an input creation timestamp that marks when the cloud gaming input information was created, (ii) receiving, in response to sending the cloud gaming input information, a frame creation timestamp that marks when a frame was drawn for a virtual display at the cloud gaming server, (iii) storing the frame creation timestamp in memory, (iv) creating a frame render timestamp that marks a timing of an intercepted client-side video frame render event of translating the frame to draw a corresponding frame for a display at a cloud gaming client device, and (v) automatically providing feedback to improve a cloud gaming experience. Various other methods, systems, and computer-readable media are also disclosed.
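The timestamps collected by the method support simple end-to-end latency accounting; the sketch below is hypothetical (field names and the split into server-side and client-side components are illustrative, and times are in milliseconds).

```python
def input_to_render_latency(input_created, frame_created, frame_rendered):
    """Total latency from input creation to client-side frame render."""
    server_latency = frame_created - input_created    # input -> server frame draw
    client_latency = frame_rendered - frame_created   # server frame -> client render
    return server_latency + client_latency

total = input_to_render_latency(input_created=100,
                                frame_created=140,
                                frame_rendered=165)   # 65 ms end to end
```

Feedback to improve the experience could then be driven by thresholds on these components, e.g. flagging sessions where client-side latency dominates.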
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
55.
RDMA TRANSMIT FLOW SCHEDULING AND PACING SCHEME FOR CONGESTION MANAGEMENT IN HIGH-PERFORMANCE AI/ML NETWORKS
A system for facilitating RDMA transmit flow scheduling and pacing is disclosed. The system may determine a set of groups, each associated with a range of transmission rates. The system may determine a transmission rate of a queue pair (QP) and assign the QP to a first group of the set of groups, where the transmission rate of the QP is within the range of transmission rates of the first group. The system may determine an available QP of at least one group from the set of groups. The system may schedule transmission, by an arbiter, of the available QP, and may transmit, by the arbiter, a message associated with the available QP.
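The grouping-and-arbitration scheme can be sketched as follows; the rate-range boundaries and the lowest-group-first arbitration order are assumed example choices, not details from the patent.

```python
GROUPS = [(0, 10), (10, 100), (100, 1000)]  # (min, max) rate in Gb/s, max exclusive

def assign_group(qp_rate):
    """Return the index of the group whose rate range contains qp_rate."""
    for idx, (lo, hi) in enumerate(GROUPS):
        if lo <= qp_rate < hi:
            return idx
    raise ValueError("rate outside all groups")

def schedule(available_qps):
    """Arbiter over (group, qp) pairs, serving lower-rate groups first."""
    return sorted(available_qps, key=lambda g_qp: g_qp[0])

gid = assign_group(25)  # 25 Gb/s falls in group 1
order = schedule([(2, "qp3"), (0, "qp1"), (1, "qp2")])
```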
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
A system and method for wireless thermal management is provided. The system may receive at least one wireless link capacity estimate from a first device, the estimate indicating an estimated data transfer or transmission rate associated with a communication device. The system may receive at least one throughput estimate from a second device, the throughput estimate being associated with a data transfer rate during a time period. The system may determine, based on the at least one wireless link capacity estimate or the at least one throughput estimate, a wireless link metric indicating a power consumption or a load associated with the communication device. The system may determine, based on the wireless link metric, whether to apply a level of thermal mitigation to the communication device.
Systems and methods for eye tracking are disclosed. The system may emit, by a digital micro-mirror device(s) (DMD), light, including binary images or grayscale images, onto an eye(s) of a user. The system may determine, by single pixel detectors, a reflection of the light from the eye(s). The system may determine, by a first single pixel detector, a first signal associated with the reflection of the light from a retina of the eye(s). The system may determine, by a second single pixel detector, a second signal associated with the reflection of light from the retina of the eye(s). The system may perform, by the DMD(s), a transformation on the first and second signals to determine a grayscale image(s) of the retina. The system may determine, by the DMD(s), a 3D shape(s) of the retina based on performing a 3D reconstruction(s) associated with pixels of the grayscale image(s) of the retina.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
In one embodiment, a method includes, at each iteration i among N iterations of a first loop, reading first data corresponding to row i of a first tensor from a first source memory, reading second data from column i of the transpose buffer, writing the first data to column i of the transpose buffer, and causing the second data to be written to row i of a second tensor at a first destination memory and, at each iteration j among N iterations of a second loop, reading third data corresponding to row j of a third tensor from a second source memory, reading fourth data from row j of the transpose buffer, writing the third data to row j of the transpose buffer, and causing the fourth data to be written to row j of a fourth tensor at a second destination memory.
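The core transpose-buffer idea can be shown with a simplified single-tensor sketch: writing row i of the source into column i of the buffer leaves the buffer holding the transpose, which is then read out row by row. The interleaved two-loop ping-pong in the method, which drains one tensor from the buffer while filling it with the next, is omitted here for brevity.

```python
N = 3
src = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
buf = [[0] * N for _ in range(N)]

# Write row i of src into column i of the transpose buffer.
for i in range(N):
    for k in range(N):
        buf[k][i] = src[i][k]

# Reading rows of the buffer now yields the transpose of src.
dst = [list(row) for row in buf]
```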
The disclosed fiber optic cable splice case may include (1) an outer enclosure with a plurality of cable funnels defining paths from an exterior to an interior of the outer enclosure, (2) a clamp connected to the exterior of the outer enclosure, where the clamp attaches the outer enclosure to a powerline conductor, and (3) an inner enclosure positioned at least partially within, and rotatably coupled to, the outer enclosure, where the inner enclosure defines (a) a splice cavity within the inner enclosure, where the cavity is configured to store an optical fiber splice tray for coupling corresponding optical fibers of each of a pair of fiber optic cable segments and (b) a cable channel about an exterior of the inner enclosure, where the cable channel carries a portion of each of the pair of segments between the funnels and the cavity. Various other components and methods are also disclosed.
A system for Remote Direct Memory Access incast congestion management is provided. The system may access one or more receive work queue entities (RWQEs) associated with one or more send work queue entities (SWQEs), determine a transmission rate based on an available bandwidth and a number of transmitter devices associated with the one or more SWQEs, and transmit a rate control notification (RCN) message to a transmitter device associated with a send work queue entity of the one or more SWQEs, wherein the rate control notification message comprises the determined transmission rate.
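A minimal sketch of the RCN rate computation follows, assuming an even split of the receiver's available bandwidth among the transmitters with queued work; the field names and the even-split policy are illustrative assumptions.

```python
def compute_rcn_rate(available_bandwidth_gbps, num_transmitters):
    """Divide available receive bandwidth evenly among active transmitters."""
    if num_transmitters == 0:
        return available_bandwidth_gbps
    return available_bandwidth_gbps / num_transmitters

def make_rcn(available_bandwidth_gbps, transmitters):
    """Build one rate control notification (RCN) message per transmitter."""
    rate = compute_rcn_rate(available_bandwidth_gbps, len(transmitters))
    return [{"dest": t, "rate_gbps": rate} for t in transmitters]

msgs = make_rcn(400, ["tx0", "tx1", "tx2", "tx3"])  # 100 Gb/s each
```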
In one embodiment, a method includes determining that a batched-matrix-multiplication (bmm) operation between a first activation tensor and a second activation tensor needs to be performed, collecting the second activation tensor in two blocks from activation buffers of N tensor processor units, splitting each of the two blocks of the second activation tensor into an MSB tile and an LSB tile, loading the second activation tensor into weight buffers of the N tensor processor units by filling a first entry of each weight buffer of each of the N tensor processor units with contents of the MSB tiles of the two blocks and filling a second entry of the weight buffer with contents of the LSB tiles of the two blocks, and generating a bmm result using the first activation tensor distributed in the activation buffers and the second activation tensor in the weight buffers.
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
In one embodiment, a method includes receiving a user input comprising input tokens from a client system, parsing the user input to determine ontology tokens and utterance tokens corresponding to the input tokens, decoding the ontology tokens and the utterance tokens to generate a span-based frame representation comprising intents, slots, and a span, wherein the ontology tokens are decoded into the intents and slots, and wherein the utterance tokens are decoded to determine the span comprising one or more tokens of the input tokens, wherein the span comprises a first index endpoint associated with a first token of the one or more tokens and a second index endpoint associated with a second token of the one or more tokens, and executing, responsive to the user input, one or more tasks based on the span-based frame representation.
Systems, apparatuses and methods provide technology that identifies a message that is to be transmitted across a network, divides the message into a plurality of portions that are arranged in a first order, and generates a plurality of packets based on the plurality of portions. The technology maps different network paths for the plurality of packets to be transmitted to a destination, sets headers of the plurality of packets to represent the first order and the different network paths, transmits the plurality of packets over the network in an out-of-order fashion to the destination based on the headers, and arranges the plurality of transmitted packets into the first order based on the headers of the plurality of packets.
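The split, out-of-order transmit, and reassembly flow can be sketched as below; the header here carries only the sequence index (a real header would also carry the assigned network path), and the dict-based packet format is an illustrative assumption.

```python
import random

def make_packets(message, n_parts):
    """Divide the message into ordered portions and attach sequence headers."""
    size = -(-len(message) // n_parts)  # ceiling division
    parts = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": i, "payload": p} for i, p in enumerate(parts)]

def reassemble(packets):
    """Restore the original order using the sequence headers."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)

pkts = make_packets("HELLOWORLD", 5)
random.shuffle(pkts)       # packets arrive out of order
msg = reassemble(pkts)     # -> "HELLOWORLD"
```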
Three-dimensional application specific integrated circuit (IC) architecture is described herein. In one aspect, an IC may include a first die including: a first semiconductor layer; a plurality of processing elements (PEs) located on the first semiconductor layer; and a first interface region of the first semiconductor layer, electrically coupled to the plurality of PEs and configured to communicate electrical signals with the plurality of PEs; a second die including: a second semiconductor layer; a plurality of IC elements located on the second semiconductor layer; and a second interface region of the second semiconductor layer, electrically coupled to the plurality of IC elements and configured to communicate electrical signals to the plurality of IC elements, where the first interface region and the second interface region are electrically coupled to each other and configured to transmit electrical signals between the plurality of PEs and the plurality of IC elements.
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid state devices, the devices being of types provided for in two or more different main groups, or in a single subclass, e.g. forming hybrid circuits
H01L 23/00 - Details of semiconductor or other solid state devices
H01L 23/48 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements
H10B 80/00 - Assemblies of multiple devices comprising at least one memory device covered by this subclass
65.
SUPPLEMENTING USER PERCEPTION AND EXPERIENCE WITH AUGMENTED REALITY (AR), ARTIFICIAL INTELLIGENCE (AI), AND MACHINE-LEARNING (ML) TECHNIQUES UTILIZING AN ARTIFICIAL INTELLIGENCE (AI) AGENT
According to examples, a system for supplementing user perception and experience via augmented reality (AR), artificial intelligence (AI), and machine-learning (ML) techniques is described. The system may include a processor and a memory storing instructions. The processor, when executing the instructions, may cause the system to receive data associated with at least one of a location, context, or setting and determine, using at least one artificial intelligence (AI) model and at least one machine learning (ML) model, relationships between objects in the at least one of the location, context, or setting. The processor, when executing the instructions, may then apply an artificial intelligence (AI) agent to analyze the relationships and generate a three-dimensional (3D) mapping of the at least one of the location, context, or setting and provide an output to aid a user's perception and experience.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/84 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
Exemplary embodiments relate to the application of coordinated or cooperative effects to a coordinated activity. A coordinated activity protocol is provided for synchronizing interactive effects and experiences across multiple devices. The protocol allows effects to be initiated and coordinated on multiple devices via a real-time communications (RTC) channel. Messages may be exchanged to cooperatively initiate the effect, and (once initiated) generic data may be exchanged via application programming interface (API) calls. According to some embodiments, the coordinated activity is a shared video experience, such as a video conference or shared video-watching experience. According to other embodiments, the coordinated activity relates to other types of experiences, such as single- or multi-player games, shared book reading, communal interactions with a photo album, etc.
In one embodiment, a method includes accessing an image of one or more target buildings captured by a device; detecting target building planes of the one or more target buildings in the image using a machine learning program; for each of the target building planes: generating a target embedding and target feature points for the target building plane; querying a three-dimensional (3D) map using the target embedding to identify at least one matching building plane for the target building plane; using the target feature points to identify matching feature points of the at least one matching building plane; and determining three-dimensional (3D) locations of the matching feature points using the 3D map; and determining a pose of the device by comparing the target feature points of the target building planes and the 3D locations of the matching feature points associated with the target building planes.
The disclosed systems and methods may include (1) receiving user input from a first user of a social media platform selecting to invite a second user to co-view a social media feed, (2) in response to receiving the user input, creating a digital conferencing room configured to stream a display of the social media feed, and (3) adding the first and second user to the digital conferencing room. Various other methods, systems, and computer-readable media are also disclosed.
Systems and methods for providing a camera of a device as a virtual camera for one or more other communication devices are disclosed. The device may receive one or more requests from one or more communication devices to utilize a camera of the device as a virtual camera of the one or more communication devices. The device may allow access to the camera of the device to at least one of the one or more communication devices. The device may enable the at least one communication device to utilize the camera of the device to capture one or more images or videos. The device may enable provision of the captured one or more images or videos to the at least one communication device to enable display of the captured one or more images or videos by the at least one communication device.
According to examples, a system for providing generation and delivery of a composite content item associated with a primary content item is described. The system may include a processor and a memory storing instructions. The processor, when executing the instructions, may cause the system to access a primary content item, provide a supplemental content item associated with the primary content item, and generate a composite item based on the primary content item and the supplemental content item. The processor, when executing the instructions, may then enable sponsorship of the composite content item and enable publication of the composite content item.
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/4725 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
71.
INTEGRATED CHIPLET-BASED CENTRAL PROCESSING UNITS WITH ACCELERATORS
In some embodiments, a system-on-chip includes a central processing unit (CPU); an accelerator coupled to the CPU via a first die-to-die interconnect; and uniform memory coupled to the CPU via a second die-to-die interconnect. In some embodiments, in order to prevent use of accelerator memory for processing operations by the accelerator, the accelerator utilizes a uniform memory access tunneling system located in the accelerator to tunnel a high-level interconnect protocol associated with the second die-to-die interconnect to a die-to-die interconnect protocol associated with the first die-to-die interconnect, the uniform memory access tunneling system being configured to allow access to the uniform memory using a shared address space.
In one embodiment, a method includes accessing a text input and a scene input corresponding to the text input, wherein the scene input comprises semantic segmentations, generating text tokens for the text input and scene tokens for the scene input by machine-learning models, generating predicted image tokens based on the text tokens and the scene tokens by the machine-learning models, and generating an image corresponding to the text input and the scene input based on the predicted image tokens by the machine-learning models.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
73.
Reading Out Scene Analysis for Users Wearing Head-mounted Devices
In one embodiment, a method includes accessing sensor signals associated with a scene in a field of view of cameras of a head-mounted device associated with a first user from the head-mounted device, determining whether one or more criteria are satisfied for triggering a proactive analysis of the scene based on the sensor signals by machine-learning models, analyzing the scene to identify objects of interest to the first user in the scene based on the determination that the criteria are satisfied, generating a description of the objects based on an analysis of the objects and the sensor signals, wherein the description comprises relational information associated with one or more of the objects, and providing instructions to the head-mounted device to present an audio readout of the description to the first user.
In some embodiments, a method includes receiving, at a central processing unit (CPU)-based demultiplexer of a CPU, an input video data stream; performing, at the CPU, an accelerator decoding configuration assessment of an accelerator decoding configuration of an accelerator; and based upon the accelerator decoding configuration assessment, dynamically decoding CPU-based demultiplexer output from the CPU-based demultiplexer using a CPU-based decoding unit and an accelerator-based decoding unit. In some embodiments of the method, the accelerator decoding configuration assessment includes performing an accelerator-based decoding unit hardware configuration assessment of the accelerator-based decoding unit and a CPU-based decoding unit software configuration assessment of the CPU-based decoding unit software utilized for the CPU-based decoding unit.
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
75.
INTEGRATED CHIPLET-BASED CENTRAL PROCESSING UNITS WITH ACCELERATORS FOR SYSTEM SECURITY
In some embodiments, a computer-implemented method includes receiving, at a security agent of a host central processing unit (CPU), accelerator firmware from flash memory; determining, at the security agent, whether the accelerator firmware includes a critical accelerator firmware component or a non-critical accelerator firmware component; authenticating, at the security agent, the critical accelerator firmware component instantaneously upon a determination that the accelerator firmware is the critical accelerator firmware component, wherein authenticating the critical accelerator firmware component yields an authenticated critical accelerator firmware component; and providing the authenticated critical accelerator firmware component to an accelerator via a sideband bus for execution at the accelerator.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Methods, systems, and apparatuses are provided for calculating and using uplink receive beamformers in wireless devices. In a demodulation reference signal (DMRS)-based beamformer calculation scheme, the computational complexity can be prohibitively high if a unique beamformer is calculated for each and every resource element (RE). Beamformer calculations may instead be consolidated across resource block groups (RBGs) by combining over frequency, time, or other means, reducing the number of beamformers to be calculated.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
H04L 5/00 - Arrangements affording multiple use of the transmission path
Systems and methods for providing contextual collaboration for conferencing are disclosed. In various examples, a system may include a computing device that establishes a remote network connection with a conferencing device configured to enable at least one of audio or visual communication between a group of participants. The computing device may communicate, via radio signals, with a device associated with, carried by, or near a first participant. The computing device may determine a location of the device based on the received radio signals and may authenticate the first participant with the conferencing device to enable communication between the first participant and the group of participants. The computing device may also determine, in real time, an interaction of the first participant with the conferencing device based on the radio signals received from the device.
An audio processing system includes: a receiver configured to receive original audio data; a memory storing instructions; a processor configured to execute the instructions to cause the audio processing system to: separate background noise audio data, first speaker audio data, and second speaker audio data; recognize first speaker speech, convert the first speaker speech to first speaker text, translate the first speaker text to second language text, and convert the second language text to second speech; recognize second speaker speech, convert the second speaker speech to second speaker text, translate the second speaker text to the second language text, and convert the second language text of the second speaker to second speech for the second speaker; and generate encoded audio data; and a transmitter configured to transmit the encoded audio data to a content user device.
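The per-speaker pipeline in the abstract above (separate sources, then recognize, translate, and re-synthesize each speaker) can be sketched as a chain of stages. Every stage here is a stand-in stub and the two-word lexicon is invented; the sketch only shows the data flow, not real speech or translation models.

```python
# Toy sketch of the separate → recognize → translate → synthesize flow.
LEXICON = {"hello": "hola", "world": "mundo"}   # invented word-level lexicon

def separate(audio):
    # Source-separation stub: audio arrives pre-labeled in this sketch.
    return audio["noise"], audio["spk1"], audio["spk2"]

def recognize(speech_tokens):        # speech-to-text stub
    return " ".join(speech_tokens)

def translate(text):                 # word-for-word translation stub
    return " ".join(LEXICON.get(w, w) for w in text.split())

def synthesize(text):                # text-to-speech stub: tokens stand in for audio
    return text.split()

def process(audio):
    """Run both speakers through the pipeline and return 'encoded' output."""
    noise, s1, s2 = separate(audio)
    return {"noise": noise,
            "spk1": synthesize(translate(recognize(s1))),
            "spk2": synthesize(translate(recognize(s2)))}
```

Each speaker's audio takes the same four-stage path independently, with the background noise carried alongside — mirroring the three-way separation the claim recites.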
In one embodiment, a method includes, by one or more computing devices, receiving, from a client system of a first user of an online social network, a text post inputted by the first user, parsing the text post to identify a query associated with the text post, sending, to the client system, instructions for presenting a confirmation prompt requesting confirmation of the identified query from the first user, receiving, from the client system, a confirmation of the identified query from the first user, generating, in response to receiving the confirmation, a recommendation list responsive to the query, wherein the recommendation list comprises references to one or more objects referenced in one or more prior comments associated with one or more prior posts of the online social network associated with the query, and sending, to the client system, instructions for presenting the recommendation list to the first user.
Systems, apparatuses and methods provide technology that determines that a primary slice of a plurality of slices of a communication network is overloaded. The plurality of slices includes a backup slice. The technology assigns user equipment to operate with the backup slice in response to the primary slice being overloaded. The technology identifies that a trigger has occurred, wherein the trigger is associated with one or more of the user equipment, the backup slice or the primary slice, and re-assigns the user equipment to the primary slice from the backup slice so that the user equipment operates with the primary slice.
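The failover/failback behavior described above — move user equipment to the backup slice on overload, return it to the primary on a trigger — can be sketched with a small state machine. The load threshold, slice names, and trigger semantics are assumptions for illustration.

```python
# Hypothetical sketch of overload-driven slice assignment.
class SliceManager:
    def __init__(self, overload_threshold):
        self.overload_threshold = overload_threshold
        self.assignment = {}          # ue_id -> slice name

    def is_overloaded(self, load):
        return load > self.overload_threshold

    def assign(self, ue_id, primary_load):
        # Fail over to the backup slice while the primary is overloaded.
        slice_name = "backup" if self.is_overloaded(primary_load) else "primary"
        self.assignment[ue_id] = slice_name
        return slice_name

    def on_trigger(self, ue_id):
        # A trigger (e.g. the primary recovering) re-assigns the UE to the primary.
        self.assignment[ue_id] = "primary"
        return self.assignment[ue_id]
```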
Systems, apparatuses and methods provide technology that identifies a first post that is submitted to a first group of a social network. The technology identifies that the first post is a cross-pollination candidate, identifies a second group of the social network, generates a first vector that is to represent one of the first post or the first group, generates a second vector that is to represent the second group, determines whether the second group matches cross-pollination criteria based on a comparison of the first vector to the second vector, and determines whether to automatically generate a second post based on the first post, and submit the second post to the second group based on whether the second group matches the cross-pollination criteria.
G06F 18/2325 - Non-hierarchical techniques using vector quantisation
G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
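The vector-comparison step in the cross-pollination abstract above reduces to scoring a post (or source-group) vector against a candidate-group vector. A cosine-similarity test against a threshold is one natural reading; the embedding method and the threshold value are assumptions, not taken from the disclosure.

```python
# Sketch of the cross-pollination decision as a cosine-similarity threshold.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def should_cross_pollinate(post_vec, group_vec, threshold=0.8):
    """Cross-post when the candidate group's vector is close enough
    to the post's vector (threshold is an illustrative choice)."""
    return cosine(post_vec, group_vec) >= threshold
```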
82.
Learning Neural Light Fields with Ray-Space Embedding Networks
In particular embodiments, a computing system may access a set of training images for training a neural light field network for view synthesis. Using each training image, the computing system may train the neural light field network by casting, for each pixel of the training image, a ray into a three-dimensional (3D) space, the ray including integrated radiance along the ray, mapping first ray coordinates of the ray into an embedding network, transforming, using the embedding network, the first ray coordinates into second ray coordinates, applying positional encoding to the second ray coordinates, generating, using the neural light field network, a predicted color value for the pixel based on the positionally encoded second ray coordinates, comparing the predicted color value for the pixel with a ground-truth color value for the pixel, and updating the neural light field network and the embedding network based on the comparison.
A system and method for providing contextually-based content displays are provided. The system may include a device having a display providing content in a viewable region of the display. The device may include a camera device configured to track a scene of a real-world environment captured in a field of view of the camera device. The device may determine a region of interest in the scene. The device may perform object recognition on the scene tracked by the camera device. The device may determine an environmental interaction based on the object recognition and the region of interest. The device may adaptively alter, based on the environmental interaction, a position of the content provided by the display.
In one embodiment, a method includes receiving a user input including an ambiguous mention by a user from the user's client system, analyzing the user input to identify intents and slots based on a natural-language understanding module, where at least one slot is an ambiguous slot corresponding to the ambiguous mention, identifying candidate entities corresponding to the ambiguous slot, ranking the candidate entities based on the user's user profile data, sending to the user's client system instructions for prompting the user to select a candidate entity from a set of candidate entities having a rank greater than a threshold rank, where the selected candidate entity is resolved to the ambiguous slot, and storing the selected candidate entity together with the ambiguous mention as a mention-entity pair as part of the user profile data responsive to receiving the selection from the user.
A system including a machine-learning accelerator (MLA) hardware comprising computation-control units that each have a programmable dependency matrix; and a compiler computing module configured to generate, based on a machine-learning model, dependency instructions indicating dependencies between the computation-control units; wherein the computation-control units include at least: a first computation-control unit configured to generate, after completion of a first operation, a synchronization token representing the completion of the first operation, the synchronization token specifying a recipient identifier for an intended recipient computation-control unit of the synchronization token; a second computation-control unit configured to: configure the programmable dependency matrix of the second computation-control unit according to the dependency instructions to include dependency conditions for performing operations; receive the synchronization token based on the recipient identifier; update a dependency state to reflect the received synchronization token; and execute an operation in response to a determination that the dependency state satisfies the dependency condition.
The disclosed computer-implemented method may include detecting, by a computing device, a command to clip a video segment from a video stream. The method may also include extracting, by the computing device, the video segment from the video stream based on the command. Additionally, the method may include automatically identifying, by the computing device, one or more points of interest in the video segment. Furthermore, the method may include determining, by the computing device, one or more sets of coordinates to crop the video segment based on a predetermined aspect ratio and the one or more points of interest. Finally, the method may include creating, by the computing device, a short-form video by cropping and reformatting the video segment based on the one or more sets of coordinates. Various other methods, systems, and computer-readable media are also disclosed.
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
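The crop-coordinate step in the video-clipping abstract above can be sketched by centering a fixed-aspect crop on the detected points of interest and clamping it to the frame. Centering on the mean point and targeting a full-height 9:16 crop are simplifying assumptions; the patented method may weight points of interest differently.

```python
# Sketch: derive crop coordinates from points of interest and an aspect ratio.
def crop_box(points, frame_w, frame_h, aspect=9/16):
    """Return (left, top, right, bottom) for a full-height crop of the given
    aspect ratio, centered on the mean of the points of interest."""
    crop_h = frame_h
    crop_w = min(frame_w, round(crop_h * aspect))
    cx = sum(x for x, _ in points) / len(points)          # mean x of interest points
    left = min(max(round(cx - crop_w / 2), 0), frame_w - crop_w)  # clamp to frame
    return left, 0, left + crop_w, crop_h
```

A point of interest near the frame edge produces a crop pinned to that edge rather than one that spills outside the frame.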
87.
Software Call Translations for On-Device Machine Learning Execution
Aspects of the present disclosure are directed to translating application calls for on-device machine learning execution. A translation layer supports on-device machine learning execution by translating JavaScript software application call data to achieve interoperability with on-device machine learning models. For example, JavaScript software applications interact with data, such as images, audio, video, and/or text, in a format or data type that is compatible with the application. On the other hand, machine learning models interact with data in a form conducive to mathematical operations, such as a data structure representation (e.g., tensor representation). Implementations translate data types and/or data files to provide compatible data to each of a native JavaScript software application and on-device machine learning models. The translation layer can translate JavaScript application calls to provide compatible data to the machine learning model(s), and output from the machine learning model(s) to provide compatible data to the JavaScript application.
Systems, apparatuses and methods provide technology that identifies historical data for a first user, where the historical data identifies previous posts by the first user. The technology identifies an event. The technology determines an affinity score for the first user based on the historical data and the event, determines whether the affinity score meets a threshold, and if the affinity score meets the threshold, determines one or more of a phrase or an effect to present to the first user.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
89.
LOCATION AND BEHAVIOR BASED PREFETCH AND CACHING OF REMOTE DATA
A method for prefetching and caching data onto a device is disclosed. An association of a user with a travel to a destination location is determined. Data network availability in at least a portion of a likely future travel path from a current location of the user to the destination location is analyzed. Based on the analysis of the data network availability, a segment of the likely future travel path where the data network is likely unreliable is identified. Data likely desired by the user when the user travels in the identified segment is predicted. A device of the user is caused to obtain and cache the predicted data.
A system and method for Crest Factor Reduction (CFR) pulse cancellation (PC) in a single-carrier or multicarrier telecommunication environment may enable obtaining better signal quality while maintaining Peak to Average Power Ratio (PAPR) and/or power efficiency. A PC signal is generated by multiplying a truncated sinc signal with another window signal. The bandwidth of the PC signal may be greater than the bandwidth of the corresponding carrier signal. The center frequency of the PC signal may be offset with respect to the center frequency of each given carrier in the multi-carrier scenario to mitigate edge-effect signal quality and interference problems.
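The PC-signal construction described above — a truncated sinc multiplied by a window, then frequency-offset — can be sketched directly. The Hann window, pulse length, and sample rate are illustrative choices; the disclosure does not fix them.

```python
# Sketch of pulse-cancellation signal generation: truncated sinc × window,
# frequency-shifted by a complex exponential (parameter values are illustrative).
import cmath, math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def pc_pulse(length, bandwidth, freq_offset, fs):
    """Truncated-sinc pulse of `length` samples, Hann-windowed, shifted by
    `freq_offset` Hz relative to the carrier center."""
    mid = (length - 1) / 2
    pulse = []
    for n in range(length):
        t = (n - mid) / fs                                  # time relative to pulse center
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / (length - 1))  # Hann window
        s = sinc(bandwidth * t) * w                         # truncated, windowed sinc
        pulse.append(s * cmath.exp(2j * math.pi * freq_offset * t))  # frequency offset
    return pulse
```

The window tapers the truncated sinc to zero at its edges (reducing spectral leakage), and the complex exponential moves the pulse's center frequency off the carrier center, per the abstract.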
Systems, apparatuses and methods provide technology that generates first data, stores the first data in a data storage of a first computing device to generate first stored data, transmits the first data as first test data along a first integrity path comprising at least one first hardware device. The technology further receives, with the first computing device, the first test data from the first integrity path and compares the first stored data to the received first test data to determine if a first data path error exists in the first integrity path.
G06F 11/263 - Generation of test inputs, e.g. test vectors, patterns or sequences
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
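The loopback check in the integrity-path abstract above — store generated data, send it around the path, compare what comes back — is a simple round-trip comparison. Modeling the hardware hops as functions over bytes is an assumption for illustration.

```python
# Sketch of the loopback integrity check over a path of hardware hops.
def check_integrity_path(data, path):
    """`path` is a list of hop functions (stand-ins for hardware devices).
    Returns (received, error_detected)."""
    stored = bytes(data)            # first stored data, kept for comparison
    received = bytes(data)
    for hop in path:                # traverse each device on the integrity path
        received = hop(received)
    return received, received != stored
```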
92.
Locality Sensitive Hashing to Generate N-dimensional Vectors of Risks and Conduct Risk Analysis
Systems, apparatuses and methods provide technology that identifies first characteristics of a first risk associated with a system and applies a locality sensitive hashing process to the first characteristics to map the first characteristics to first buckets of a plurality of buckets. The technology further generates a first vector based on the first characteristics being mapped to the first buckets, and identifies a mitigation plan to at least partially mitigate the first risk based on the first vector.
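The bucket-mapping step in the risk-analysis abstract above can be sketched with random-hyperplane LSH, one common locality-sensitive scheme (the disclosure does not name which scheme it uses). Each hyperplane contributes one bucket-membership bit, and the bits form the risk's vector.

```python
# Sketch: locality-sensitive hashing of risk characteristics into buckets,
# read off as an n-dimensional vector (hyperplane LSH is an assumed choice).
def hyperplane_lsh(features, hyperplanes):
    """One bit per hyperplane: which side of the plane the feature vector lies on."""
    return [1 if sum(f * h for f, h in zip(features, plane)) >= 0 else 0
            for plane in hyperplanes]

def risk_vector(characteristics, hyperplanes):
    # The bucket-membership bits form the risk's vector; similar risks land
    # in similar buckets, which is what enables mitigation-plan lookup.
    return hyperplane_lsh(characteristics, hyperplanes)
```

Risks with similar characteristics collide in the same buckets with high probability, so a mitigation plan indexed by vector can be retrieved for near-duplicates of known risks.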
In one embodiment, a method includes receiving a first portion of a speech input from a first user from a first client system associated with the first user during a first turn of a dialog session, wherein the first user is in a multi-channel audio communication with one or more second users, determining an intent of the first user to speak to an assistant system privately based on contextual information associated with the first portion of the speech input during the first turn of the dialog session, and sending, to the first client system responsive to determining the intent of the first user to speak to the assistant system privately and during the dialog session, instructions for muting audio transmission of subsequent second portions of the speech input from the first user during the dialog session to one or more of the second users in the multi-channel audio communication.
In one embodiment, a method includes presenting a suggestion to a user of a head-mounted device by the head-mounted device via an assistant xbot during a dialog session between the user and the assistant xbot, wherein the suggestion is associated with a plurality of actions to be performed by an assistant system associated with the assistant xbot, accessing signals from inertial measurement unit (IMU) sensors of the head-mounted device by the head-mounted device during the dialog session, determining a head gesture performed by the user during the dialog session by an on-device head-gesture detection model and based only on the signals from the IMU sensors, and executing a first action from multiple actions by the assistant system executing on the head-mounted device, wherein the first action is selected based on the determined head gesture during the dialog session.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
95.
Techniques for generating training data for machine learning enabled image enhancement
Systems and methods for generating training data using synthesized input images for training a machine learning model for image enhancement in accordance with embodiments of the invention are described. The system may access a target image (e.g., captured by an imaging device), and generate an input image that corresponds to the target image. The input image and target image may then be used (e.g., as part of a training data set) to train the machine learning model. For example, a generated input image corresponding to a target image may represent a version of the target image as if it were captured in low light. The target image may be a target illuminated output to be generated by enhancing the input image. The input image and target image may be used to train a machine learning model to enhance images such as those captured in low light.
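The training-pair synthesis described above — generate an input that looks like a low-light capture of the target — can be sketched as a simple intensity-scaling degradation. Real systems typically add sensor noise and other effects; plain gain-and-clip is a deliberate simplification here.

```python
# Sketch: synthesize a low-light input image from a well-lit target
# (gain-and-clip is an assumed, simplified degradation model).
def synthesize_low_light(target, gain=0.2, floor=0):
    """`target` is a list of pixel rows; returns a darkened copy."""
    return [[max(floor, int(p * gain)) for p in row] for row in target]

def make_training_pair(target, gain=0.2):
    # The darkened image is the model input; the original is the target output
    # the enhancement model learns to reconstruct.
    return synthesize_low_light(target, gain), target
```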
An online system customizes video conversations between users of the online system. During a video conversation, the online system presents a composite view to the participating users. The composite view may include visual representations of the users, a background graphic, or other types of graphics such as masks and props that the users can wear or interact with in the environment of the video conversation. The visual representations may be generated based on a live video feed of the users or include avatars of the users. The online system can determine the graphics based on information about the users. For instance, the online system determines a background graphic showing a location that the users have each visited. Upon viewing the background graphic, the users may be encouraged to interact with the background graphic or other graphics included in the composite view, which can promote an engaging video conversation experience.
A method for selecting a radio resource management scheme is disclosed. A distance between two wireless access points of two wireless local area networks is received. A calculated signal strength corresponding to the distance is determined. One or more signal strengths associated with communication between the two wireless access points are received. A difference between a first value associated with the received one or more signal strengths and a second value associated with the calculated signal strength is determined. The difference is compared to a threshold difference value to determine an environment assessment result. Based on the environment assessment result, an option among a plurality of radio resource management scheme options is selected for at least the two wireless access points.
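The environment-assessment step above — compare a calculated signal strength at a known distance with measured values — can be sketched with a log-distance path-loss model. The transmit power, path-loss exponent, reference loss, and threshold below are assumed constants, not values from the disclosure.

```python
# Sketch of the environment assessment used to pick an RRM scheme.
import math

def calculated_rssi(distance_m, tx_dbm=20.0, path_loss_exp=2.0, pl0=40.0):
    """Log-distance path loss referenced to 1 m (constants are assumptions)."""
    return tx_dbm - (pl0 + 10 * path_loss_exp * math.log10(distance_m))

def assess_environment(distance_m, measured_rssis, threshold_db=10.0):
    """Compare the mean measured RSSI between the two APs against the
    free-space estimate; large deviation suggests obstructions."""
    calc = calculated_rssi(distance_m)
    measured = sum(measured_rssis) / len(measured_rssis)
    return "obstructed" if abs(measured - calc) > threshold_db else "open"
```

An "obstructed" result would steer the selection toward a more conservative radio resource management scheme than an "open" result, per the abstract.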
Technology for identifying an object of interest includes obtaining object embeddings for a plurality of objects in an image, obtaining text embeddings for text associated with the image, determining, for each of the plurality of objects, a similarity score via a similarity model based on the text embeddings and the object embeddings, while bypassing use of bounding box coordinates, and selecting the object having the highest similarity score as the object of interest. In another example, technology for identifying an object of interest includes obtaining object embeddings for a plurality of objects in an image, obtaining text embeddings and text identifiers for text associated with the image, generating, via a single transformer encoder, a set of CLS embeddings based on the text embeddings and the object embeddings, and determining, via a neural network, the object of interest based on the CLS embeddings.
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
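The first variant in the object-of-interest abstract above — score each object's embedding against the text embedding and take the argmax, with no bounding-box coordinates involved — can be sketched with cosine similarity over toy vectors. The embeddings here are hand-made stand-ins, not outputs of a real model.

```python
# Sketch: select the object of interest by text-object embedding similarity.
import math

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def object_of_interest(text_emb, object_embs):
    """`object_embs`: dict of object id -> embedding. The object whose
    embedding is most similar to the text embedding wins."""
    return max(object_embs, key=lambda k: cosine_sim(text_emb, object_embs[k]))
```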
99.
Systems and methods for automated generation of customized content
Systems, methods, and non-transitory computer-readable media can be configured to determine a life story associated with a first user of a content sharing system. The life story can comprise a plurality of content items associated with the first user. The plurality of content items can be organized in chronological order. A determination is made that a second user has accessed the life story associated with the first user. The plurality of content items included in the life story are provided for presentation in a graphical user interface, wherein each content item is presented individually based on the chronological order.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
In one embodiment, a method for iteratively transferring a plurality of non-contiguous blocks of data from a source memory to a destination memory through n-dimensional loops without being re-programmed by a direct memory access within a machine-learning accelerator includes reading a first block of data from a first address of the source memory, processing the first block of data with an ingress modification function, and storing the first block of data to a second address of a data buffer, by an ingress component of the direct memory access within the machine-learning accelerator, and reading a second block of data from a third address of the data buffer, processing the second block of data with an egress modification function, and storing the second block to a fourth address of the destination memory, by an egress component of the direct memory access within the machine-learning accelerator.
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
G06F 12/1081 - Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
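The two-stage DMA flow in the abstract above — an ingress component that reads a block from source memory, applies a modification function, and stages it in a buffer; an egress component that reads from the buffer, applies its own modification, and writes to destination memory — can be sketched over plain lists. The block descriptors and the placeholder modification functions are illustrative assumptions.

```python
# Sketch of the ingress/egress DMA flow over non-contiguous blocks.
def dma_transfer(source, blocks, ingress_fn, egress_fn):
    """`blocks` is a list of (src_addr, length) descriptors for non-contiguous
    regions. Each block passes through the ingress modification into a staging
    buffer, then through the egress modification into the destination."""
    destination = []
    for addr, length in blocks:
        staged = ingress_fn(source[addr:addr + length])   # ingress: read + modify + stage
        destination.extend(egress_fn(staged))             # egress: read + modify + store
    return destination
```

Iterating the descriptor list plays the role of the n-dimensional loops: the engine walks many non-contiguous regions without being re-programmed between blocks.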