Some embodiments of the present disclosure are directed to optical connectors and methods of assembling the same. For example, the present disclosure provides for a “semi-detachable” connector. A mechanical receptacle may be actively aligned to the photonic integrated circuit die, allowing for full testing of the device and for a simplified assembly process. At a later step in the assembly process, the connector may be placed passively on the already-tested receptacle and adhered to it. The present disclosure may result in a simplified assembly and a smaller device that fits inside an octal small form factor pluggable (OSFP) transceiver.
A network adapter including a host interface, a network interface, packet processing circuitry, and Translation-as-a-Service (TaaS) circuitry. The host interface is to communicate with a host over a peripheral bus. The network interface is to send and receive packets to and from a network for the host. The packet processing circuitry is to process the packets. The TaaS circuitry is integrated in the network adapter and is to (i) receive from a requesting device a request to translate an input address into one or more requested addresses in a requested address space, (ii) translate the input address into the one or more requested addresses, and (iii) return the one or more requested addresses to the requesting device.
G06F 12/1072 - Decentralised address translation, e.g. in distributed shared memory systems
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Some embodiments of the present disclosure are directed to wafer alignment in multiple dies. For example, a receptacle wafer and a photonic wafer may be prepared containing a plurality of individual dies. Further, these two wafers may be aligned, wafer bonded, and cut into the individual dies. Additionally, or alternatively, these individual dies may be ready to be attached to a substrate and require no further alignment. The method of the present disclosure may be (i) cost effective since a single, passive receptacle wafer alignment results in multiple dies, (ii) repeatable (e.g., less variance in production) since it utilizes silicon lithography alignment features and scalable silicon wafer-on-wafer (WOW) assembly, and (iii) capable of improved optical performance since the thin receptacle wafer has a lower height, resulting in a shorter optical path.
A system includes one or more processors to trace one or more packets transmitted by an application distributed among a plurality of computing nodes. The one or more processors are to generate tracing data based at least in part on tracing the one or more packets. The tracing data includes temporal information associated with transmission of the one or more packets. The one or more processors are to manage a data allocation associated with the application based on the tracing data.
Some embodiments of the present disclosure are directed to an optical waveguide for co-packaged optics packages. For example, a module may include a substrate having a substrate optical waveguide, an interposer disposed on a surface of the substrate, where the interposer comprises an interposer optical waveguide, and where the interposer is configured to optically align the interposer optical waveguide with the substrate optical waveguide, a main die disposed on a surface of the interposer, and a photonic IC disposed on the surface of the interposer and configured to be in optical communication with the interposer optical waveguide. Additionally, or alternatively, the substrate optical waveguide may be configured to convey optical signals between the substrate and the interposer. Further, the interposer optical waveguide may be configured to convey optical signals between the surface of the substrate and the interposer.
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different subclasses, e.g. forming hybrid circuits
G02B 6/42 - Coupling light guides with opto-electronic elements
H01L 23/00 - Details of semiconductor or other solid state devices
Systems and methods are described herein for drop port assisted resonance detection for ring assisted Mach-Zehnder Interferometers (RAMZI). An example system comprises a ring assisted Mach-Zehnder Interferometer (RAMZI) that includes a Mach-Zehnder Interferometer (MZI) and a ring resonator, a drop port operatively coupled to the ring resonator, and a control circuit operatively coupled to the drop port and the RAMZI. The drop port is configured to capture an optical signal indicative of an output power spectrum of the ring resonator, and the control circuit is configured to tune the RAMZI for spectral alignment between the MZI and the ring resonator based on at least the optical signal.
G01B 11/27 - Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
7.
DIGITAL SIGNAL SYMBOL DECISION GENERATION WITH CONFIDENCE LEVEL BASED ON ERROR ANALYSIS
A receiver including a first component to receive a signal including a sequence of symbols and generate an equalized signal with an estimated sequence of symbols corresponding to the signal. The receiver further includes a second component to generate, based on the equalized signal, a decision including a sequence of one or more bits that represent each symbol of the estimated sequence of symbols. The second component of the receiver further generates a confidence level corresponding to the decision, wherein the confidence level is based on a comparison of a first probability that the equalized signal comprises two or more errors and a second probability that the equalized signal comprises zero errors.
H03M 13/15 - Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
H03M 13/11 - Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
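A minimal sketch of the error-probability comparison described in the abstract, assuming independent per-symbol error-probability estimates are available from the equalizer (the abstract does not say how they are derived); the function name and the log-ratio confidence metric are illustrative choices, not the patent's.

```python
import math

def decision_confidence(error_probs):
    """Compare P(zero errors) against P(two or more errors) for a block of
    decoded symbols, per the comparison described above. `error_probs` is an
    assumed per-symbol error-probability estimate; independence is assumed."""
    p0 = math.prod(1.0 - p for p in error_probs)        # zero symbol errors
    p1 = sum(p * p0 / (1.0 - p) for p in error_probs)   # exactly one error
    p2_plus = max(0.0, 1.0 - p0 - p1)                   # two or more errors
    # One plausible confidence metric: the log-ratio of the two probabilities.
    return math.inf if p2_plus == 0.0 else math.log(p0 / p2_plus)

print(decision_confidence([0.01, 0.02, 0.005]))  # larger = more confident
```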
A computer system includes a processor and a Duplicate Write Circuit (DWC). The DWC is to hold a definition that specifies an address range and a plurality of additional address ranges, and to receive, from the processor, a write command that specifies a write-data and a write-address. When the write-address falls outside the address range, the DWC is to generate a write cycle that writes the write-data to the write-address. When the write-address falls in the address range, the DWC is to generate (i) the write cycle that writes the write-data to the write-address, and (ii) a sequence of additional write cycles that write the write-data to corresponding addresses in the additional address ranges.
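The duplicate-write behaviour lends itself to a short software model. The sketch below assumes `(base, size)` range encodings and a `bus.write(addr, data)` primitive, none of which are specified in the abstract.

```python
def duplicate_write(write_addr, write_data, watched_range, extra_ranges, bus):
    """Software model of the Duplicate Write Circuit: the original write is
    always issued; if the address falls in the watched range, the same data
    is also written at the matching offset of each additional range."""
    base, size = watched_range
    bus.write(write_addr, write_data)              # the original write cycle
    if base <= write_addr < base + size:
        offset = write_addr - base
        for extra_base, _extra_size in extra_ranges:
            bus.write(extra_base + offset, write_data)   # duplicated cycles
```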
Systems, devices, and methods are provided. In one example, a system is described that includes circuits to receive a plurality of numbers at a first computing system, perform an operation on the plurality of numbers to generate a number, and use the generated number to perform stochastic rounding of a floating point number to generate a stochastically rounded floating point number.
G06F 7/499 - Denomination or exception handling, e.g. rounding or overflow
G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
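Stochastic rounding itself is standard, so a compact sketch may help; here `random.random()` stands in for the "generated number" the abstract derives from an operation on the received numbers, and rounding is shown to a multiple of `step`.

```python
import random

def stochastic_round(x, step=1.0, draw=random.random):
    """Round x to a neighbouring multiple of `step`, rounding up with
    probability equal to x's fractional position between the two multiples,
    so the rounding is unbiased in expectation."""
    lower = (x // step) * step
    frac = (x - lower) / step
    return lower + step if draw() < frac else lower

# Averaging many stochastic roundings recovers the input in expectation.
vals = [stochastic_round(2.3) for _ in range(100_000)]
print(sum(vals) / len(vals))   # ~2.3
```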
10.
ELECTRONIC DEVICE INCLUDING LIQUID COOLANT CONDUIT WITH HELICAL PORTION
An electronic device may include an electronic component, a cooling body in thermal contact with the electronic component, a conduit coupled to the cooling body to deliver a coolant to or from the cooling body, the conduit comprising a coiled portion, and a coupler coupled to the coiled portion of the conduit, the coupler being removably couplable to a coolant infrastructure coupler.
In one embodiment, a system includes a network device application-specific integrated circuit (ASIC), which includes a microcontroller to provide a command to a hardware accelerator to perform a job including gathering telemetry data from at least one hardware unit and writing the gathered telemetry data to a memory, and the hardware accelerator to gather the telemetry data from the at least one hardware unit and write the gathered telemetry data to the memory based on the command.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
H04Q 9/00 - Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
A system for transmitting data is described, among other things. An illustrative system is disclosed to include one or more circuits to perform receive-side scaling (RSS) by receiving a packet, identifying one or more bits in the packet, and forwarding the packet to a receiving queue based on the identified one or more bits in the packet.
A device, systems, and method are described which provide low-memory determination of an nth percentile. The device, systems, and method include receiving user input indicating a percentile (n) to be approximated for a measurement. The device, systems, and method further include initializing an nth-percentile estimator to an initial value and using a control loop to update the nth-percentile estimator until n% of samples are lower than the nth-percentile estimator and (100 - n)% of samples are higher than the nth-percentile estimator, wherein for each value in a set of data, the nth-percentile estimator is updated based on whether each value is higher or lower than the nth-percentile estimator.
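A minimal sketch of the control loop, assuming a fixed step size (`step` and the initial value are tuning choices the abstract leaves open): the estimator drifts until n% of samples fall below it.

```python
import random

def nth_percentile_estimate(samples, n, step=0.05, init=0.0):
    """Single-value streaming estimator: nudge up when a sample exceeds the
    estimate, down otherwise, with the nudges weighted so the loop settles
    where n% of samples are lower and (100 - n)% are higher."""
    q = n / 100.0
    est = init
    for x in samples:
        if x > est:
            est += step * q          # estimate too low: raise it
        else:
            est -= step * (1.0 - q)  # estimate too high: lower it
    return est

data = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(nth_percentile_estimate(data, 95))   # ~1.64 for a standard normal
```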
In one embodiment, a responder device includes a network interface to receive packets of a stream of packets transmitted from a requester device with packet sequence numbers, and packet processing circuitry to collect information about the packet sequence numbers of the packets that have been received from the requester device, generate a selective acknowledgement including an indication of the packet sequence numbers of at least one packet of the packets that has been received and at least one other packet of the packets that has not been received by the responder device from the requester device, wherein the at least one packet that has been received by the responder device includes at least one of the packets received out-of-order according to the packet sequence numbers, and send the selective acknowledgement to the requester device via the network interface.
A network device includes a hardware-implemented packet processing pipeline, which includes multiple pipeline stages, and a processor. The hardware-implemented packet processing pipeline is to process packets exchanged with a packet network. The processor is to execute sideband tasks for the packet processing pipeline. At least one of the pipeline stages is to trigger the processor to execute a sideband task by posting a Completion-Queue Element (CQE) on a Completion Queue (CQ) accessible to the processor.
A system for predicting and/or capturing data relating to anomalies in a networking device is provided. In one example, a networking device receives telemetry data, stores the telemetry data in a cyclic buffer, detects an anomaly, and outputs the telemetry data from the cyclic buffer. The telemetry data from the cyclic buffer may be used for training a prediction model. In another example, a trained prediction model analyzes telemetry data sampled at a first rate, predicts a future anomaly, and in response to the prediction of the future anomaly, triggers sampling of the telemetry data at a second rate, faster than the first rate.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/147 - Signalling methods or messages providing extensions to protocols defined by standardisation
In one embodiment, a sender node includes packet processing circuitry to associate, at layer 2 or 3 of the Open Systems Interconnection (OSI) model, path identifiers and per-path packet sequence numbers with packets, wherein the path identifiers identify paths from the sender node to receiver nodes, and an interface to send the packets with the associated packet sequence numbers and path identifiers to the receiver nodes.
Technologies for spreading a burst of data across multiple network paths in remote direct memory access (RDMA) over converged Ethernet (RoCE) and InfiniBand are described. An RDMA adapter receives, from a requestor device over a local interface, a request to send data of a transport flow directed to a target device over a network interface, along with one or more parameters related to multipath selection. The RDMA adapter sends a first burst of data of the transport flow via a first network path to the target device. The RDMA adapter identifies, using the one or more parameters, a second network path to the target device. The RDMA adapter sends a second burst of data to the target device on the second network path.
A peripheral device includes a bus interface and circuitry. The bus interface is to exchange bus transactions over a peripheral bus that permits out-of-order transfer of at least some of the bus transactions. The circuitry is to generate a plurality of streams of the bus transactions, to select, from among the plurality of streams, one or more streams for which transaction ordering is required, to enforce the transaction ordering among the bus transactions of the selected streams, and to send the bus transactions via the bus interface to the peripheral bus.
Approaches presented herein provide for the reduction of unwanted electrical reflections caused by impedance mismatches at an input of an optical modulator device, such as at the interface between a (radio frequency) signal source and an electro-absorption modulated laser (EML). Reflections can be reduced through use of one or more electrical filters, such as resistor-capacitor (RC) filters, that can be placed at the input of the EML device to reduce reflections through impedance matching at that location, while maintaining the efficiency and bandwidth of the modulator for high bandwidth transmission. Such a filter can be used with a single ended or differential EML device, and can be integrated on an EML chip or added as discrete components on a chip carrier on which the EML chip is supported.
Systems, computer program products, and methods are described for efficient link-down management. An example transmitter detects an impending link-down event at the transmitter. Once detected, the transmitter encodes the link-down event within a control block. The encoded control block is then transmitted via a physical layer of the communication network to a receiver. Once the control block is transmitted, the transmitter then initiates the link-down event. An example receiver receives the control block via a physical layer of the communication network from a transmitter. Then, the receiver extracts, from the control block, an operational code (opcode) identifying an impending link-down event at the transmitter. In response, the receiver retrieves, from a database, a responsive action corresponding to the link-down event based on the extracted opcode and subsequently executes the responsive action.
Systems and methods are described for in-band spectral cross-talk monitoring. An example system includes a built-in self-test (BIST) and logic circuitry and a processor. The processor is operatively coupled to the BIST and logic circuitry, a first micro ring modulator (MRM) associated with a first data packet (FD), and a second MRM associated with a second data packet (SD). The processor is configured to: receive, from the first MRM, a complement of the first data packet (FD) that comprises second MRM spectral cross-talk data; receive, from the second MRM, a complement of the second data packet (SD); and determine, using the BIST and logic circuitry, a spectral ordering of the FD and the SD based on at least the second MRM spectral cross-talk data and the SD, to address shifting in the initial mapping between the positional order of the MRMs and the spectral order of the data packets.
H04B 10/073 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an out-of-service signal
H04B 10/80 - Optical aspects relating to the use of optical transmission for specific applications, not provided for in groups H04B 10/03 - H04B 10/70, e.g. optical power feeding or optical transmission through water
Systems, switches, network endpoints, and methods are provided. In one example, a system is described that includes a latency measurement circuit to measure traffic on a network from an endpoint sender to an endpoint receiver across multiple paths. The system also includes a packet marking circuit to provide a routing mark for a packet destined for the endpoint receiver according to a network traffic measurement provided by the latency measurement circuit, where the routing mark provides an indication that supports routing for the packet to reach the endpoint receiver via a chosen path or subset of paths among the multiple paths.
Systems and methods herein are for at least one execution unit that can perform an inference using a machine learning (ML) model and that is coupled to a video encoder, where the ML model can determine a genre associated with received frames of a media stream based in part on using ML model features associated with different genres, where the video encoder can encode the media stream based in part on the determined genre.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
25.
SEQUENTIAL EVENT TRACKER WITH LOOSE ACCESS ATOMICITY
A device includes a cache to store an array that tracks occurrence of events and a processing device coupled to the cache. The processing device tracks control values associated with a state of cache lines of the array. The processing device tracks, within the cache lines, a window of cell index values corresponding to event identifier values and having a window size that moves with an occurrence of any event that is positioned beyond the window. In response to detecting an event, the processing device updates a first value of a particular cache line of the cache lines based on a control value that corresponds to the particular cache line and on whether the first value is located within a range of cell index values currently defined by the window.
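The moving window is easiest to see in a toy model. The sketch below keeps only the window mechanics (a base index plus occurrence flags) and omits the cache-line control values; all names are illustrative, not the patent's.

```python
class EventWindowTracker:
    """Tracks event occurrences inside a window of cell index values that
    slides upward whenever an event is positioned beyond the window."""

    def __init__(self, window_size):
        self.size = window_size
        self.base = 0                       # lowest event id in the window
        self.bits = [0] * window_size       # one occurrence flag per id

    def record(self, event_id):
        top = self.base + self.size - 1
        if event_id > top:                  # event beyond the window: slide
            shift = event_id - top
            if shift >= self.size:
                self.bits = [0] * self.size          # window fully replaced
            else:
                self.bits = self.bits[shift:] + [0] * shift
            self.base += shift
        if event_id >= self.base:           # ids below the window are stale
            self.bits[event_id - self.base] = 1

t = EventWindowTracker(8)
for e in (1, 3, 10):                        # recording 10 slides the window
    t.record(e)
print(t.base, t.bits)                       # base 3; flags set for ids 3 and 10
```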
A device may include: a frame having an interior; an electronic component; a heat conducting body in thermal contact with the electronic component; a conduit containing a liquid coolant, the conduit being coupled to the heat conducting body to deliver the liquid coolant to and from the heat conducting body; and a pump positioned within the interior of the frame, the pump being removably insertable into the interior of the frame and being removably couplable to the conduit to circulate the liquid coolant through the conduit.
An interconnect device is provided. In one example, an interconnect device includes ports and a power profile controller to receive a power profile, monitor one or more of data traversing the interconnect device and power consumption of the interconnect device, and, during a first time period, determine that at least one of the following holds: an ingress bandwidth exceeds a first bandwidth threshold, or the power consumption exceeds a first power threshold. At least one of the first bandwidth threshold and the first power threshold is defined in the power profile. During the first time period, in response to that determination, the power profile controller is to limit one or more of the data traversing the interconnect device and the power consumption of the interconnect device.
Systems and methods herein are for at least one circuit that can determine that a network reference associated with a first network hardware is subject to a network performance degradation in a network, can cause suspension of traffic flow associated with the network reference, can save configuration for at least the network reference at a node associated with the first network hardware, and can cause the configuration to be deployed in a second network hardware so that the network reference that was previously in the first network hardware is provided from the second network hardware to resume the traffic flow.
H04L 41/0823 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
H04L 12/24 - Arrangements for maintenance or administration
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
29.
INTERRUPT MODERATION ONLOAD FOR ACCELERATED DATA PATHS
Systems and methods herein are for use with the Virtio standard and include at least one circuit to host a virtualizer component that enables different virtual machines (VMs), an accelerator driver, and a virtualizer interrupt moderation library, where the accelerator driver can be associated with an accelerator device. The accelerator driver can monitor interrupts associated with the different VMs to determine at least a current interrupt value and packet statistics that may be used by the virtualizer interrupt moderation library as a basis to provide an intended interrupt value to be used by the accelerator device for managing further interrupts to the different VMs.
Apparatuses, systems, and techniques for classifying a candidate uniform resource locator (URL) as a malicious URL using a machine learning (ML) detection system. An integrated circuit is coupled to physical memory of a host device via a host interface. The integrated circuit hosts a hardware-accelerated security service that obtains a snapshot of data stored in the physical memory and extracts a set of features from the snapshot. The security service classifies the candidate URL as a malicious URL using the set of features and outputs an indication of the malicious URL.
Embodiments of the present disclosure are directed to synchronizing clocks across a plurality of computing devices. Generally speaking, the clocks of the plurality of devices can be synchronized to whichever of the clocks is the furthest ahead in time. More specifically, embodiments provide for establishing a common time reference without need for an external reference. Rather, a computing device or node with the clock that is furthest ahead in time among devices or nodes in a group or time domain can become the leader node and propagate time to the other nodes. Embodiments of the present disclosure can replace the traditional one-way time transfer from the IEEE 1588 timeTransmitter to the timeReceiver with two-way communication and time transfer.
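Under the stated two-way transfer, each pair of nodes can estimate their mutual clock offset and the group can elect whichever node is furthest ahead. The sketch below assumes symmetric path delay and uses illustrative names; it is a reading of the idea, not the disclosure's protocol.

```python
def two_way_offset(t1, t2, t3, t4):
    """Classic two-way time transfer: t1/t4 are stamped by the local clock,
    t2/t3 by the peer. Returns (peer clock - local clock), assuming the
    forward and return path delays are equal."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def elect_leader(local_id, peer_offsets):
    """Pick the node whose clock is furthest ahead in time. `peer_offsets`
    maps peer id -> that peer's offset relative to the local clock."""
    leader, best = local_id, 0.0
    for peer, off in peer_offsets.items():
        if off > best:
            leader, best = peer, off
    return leader

print(elect_leader("node0", {"node1": -3e-6, "node2": 7e-6}))   # node2 leads
```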
An interconnect device is provided. In one example, an interconnect device includes ports and circuits to determine a rate of change in a number of entries in a queue exceeds a first threshold or falls below a second threshold. In response to the rate of change in the number of entries in the queue exceeding the first threshold or falling below the second threshold, a rate at which packets to be transmitted from the interconnect device are processed for egress may be reduced to avoid an excessive drop or rise in power consumption of the interconnect device.
Systems and methods herein are for a printed circuit board (PCB) having open circuitry and having a three-dimensional (3D) printed material that is deposited in a single process over at least one area of the PCB, where the 3D printed material may include at least a thermally-conductive material to enable at least one thermal conductive trace by the thermally-conductive material being over an electrically-insulating material of the 3D printed material and being over the open circuitry, and where the at least one thermal conductive trace can provide heat spreading from at least one hot area of the PCB to a remote area of the PCB.
A network interface device may include: a frame having: a first frame end, a second frame end, a top frame surface, a bottom frame surface, a first lateral frame surface and a second lateral frame surface, wherein the top frame surface includes a longitudinal frame indent extending along a portion of the top frame surface and between the first lateral frame surface and the second lateral frame surface; and heat dissipating members protruding from the longitudinal frame indent of the top frame surface.
Systems and methods are directed to localized heating for a modulator incorporated into an electro-absorption modulated laser (EML). A heater may be positioned proximate one or more portions of the modulator to apply heat energy to the modulator responsive to an input. The heater may be configured to apply a dissipation of heat so that the modulator operates within a selected temperature range. The modulator and/or the heater may be thermally insulated, at least in part, from a substrate associated with the EML by one or more low thermal conductivity layers arranged between the modulator and the substrate of the EML.
Assemblies and methods are provided for optical switch assemblies. An example optical switch assembly includes a first switch member configured to support a first plurality of optical fibers. The first switch member is designed to selectively rotate about a first longitudinal axis. The assembly further includes a second switch member configured to support a second plurality of optical fibers. The second switch member defines a second longitudinal axis aligned with the first longitudinal axis. In a first position of the first switch member, a first subset of the first plurality of optical fibers is aligned with a first subset of the second plurality of optical fibers for transmitting optical signals therebetween. In a second position of the first switch member, a second subset of the first plurality of optical fibers is aligned with a second subset of the second plurality of optical fibers for transmitting optical signals therebetween.
Embodiments of the present disclosure are directed to utilizing a Host Channel Adapter (HCA) to facilitate low latency intra-node communications over a communications bus such as a Peripheral Component Interconnect express (PCIe) bus, for example. Generally speaking, hardware devices coupled with the communications bus can write short, “doorbell” messages to the HCA. The messages can indicate tasks to be performed by another device also coupled with the communications bus. The HCA in turn can write a Completion Queue Entry (CQE) based on the received message to a Completion Queue (CQ) of the other device. The other device can then read the CQE from the CQ and perform the indicated task.
An interconnect device is provided. In one example, an interconnect device includes ports and a control circuit capable of enabling and disabling control plane components based on received packets. Enabled control plane components enable the received packets to be forwarded to a destination via an egress port. Disabled control plane components enable a reduction in power consumption.
Systems, computer program products, and methods are described herein for allocation of network resources to execute AI workloads. An example system receives a data distribution task along with execution parameters, including a plurality of data portions and hosts. The system determines a plurality of points of delivery (PODs), each comprising switches with a defined radix (k), and couples the PODs to the hosts to configure a network structure optimized for AI workload execution. The system identifies at least one destination host for each source host based on the radix (k) and executes the data distribution task by transmitting data portions from each source host to the identified destination hosts through a corresponding subset of the PODs.
Systems, computer program products, and methods are described herein for allocation of network resources. An example system receives, from a user input device, a data distribution task with execution parameters that include a plurality of data portions and a plurality of hosts; determines a plurality of points of delivery (PODs), wherein the plurality of PODs comprises a plurality of switches, wherein each switch is associated with a radix (k); operatively couples the plurality of PODs to the plurality of hosts to configure a network structure; identifies at least one destination host for each source host based on at least the radix (k); and executes the data distribution task by transmitting respective portions of the plurality of data portions from each source host to the at least one identified destination host via a corresponding subset of the plurality of PODs.
Systems and methods herein are for a semiconductor product that includes landing areas, where the landing areas may include at least one hot area that occurs during operation of the semiconductor product, where the semiconductor product may also include heat spreader material from a transfer application, and wherein the heat spreader material may be conformal in the landing areas and can spread heat associated with the at least one hot area to at least one dissipation area of the semiconductor product.
Systems, computer program products, and methods are described for data communication. In an example, a data distribution task with execution parameters that include a plurality of data portions and a plurality of hosts is received from a user input device. A plurality of points of delivery (PODs) are determined, wherein the plurality of PODs comprises a plurality of switches, and the plurality of PODs are operatively coupled to the plurality of hosts to configure a network structure. At least one destination host is identified for each source host based on at least a number of communication hops required for traversal of data from each source host to the at least one destination host via a corresponding subset of the plurality of switches, and the data distribution task is executed by transmitting respective portions of the plurality of data portions from each source host to the at least one identified destination host.
An optical chip configured for coupling to optical fibers and methods of manufacturing the same are provided. The optical chip includes a plurality of optical structures embedded within an optical routing layer; a first support layer; and a second support layer, bonded onto the exposed surface of the first support layer. The first support layer has a first etched mirror and a first v-groove aligned with the first etched mirror formed in an exposed surface of the first support layer. The second support layer has a second etched mirror and a second v-groove aligned with the second etched mirror formed in an exposed surface of the second support layer. The first etched mirror and the second etched mirror are optically aligned with a first optical structure and a second optical structure of the plurality of optical structures, respectively.
An optical chip including integrated active photonics and methods of manufacturing the same are provided. The optical chip includes optical structures embedded within an optical routing layer; and a support layer secured with respect to the optical routing layer. The support layer includes an etched mirror and a groove formed in an exposed surface of the support layer. The groove is aligned with the etched mirror. Disposed within the groove are a first electrode, a second electrode, and an active region including gain material disposed between the first electrode and the second electrode such that applying a voltage difference between the first electrode and the second electrode causes the gain material to lase toward the etched mirror. The etched mirror is aligned with a respective optical structure of the plurality of optical structures such that the etched mirror directs lased light toward the respective optical structure for coupling into the optical routing layer.
Embodiments of the present disclosure are directed to efficient processes for recovering a hardware device in a computing system. Generally speaking, embodiments of the present disclosure include a two-part device recovery process. In a first part of the two-part process, an initial firmware image can be provided to the recovery device over a first, slower communication bus of the computing system. This initial image can provide the basis for the device to communicate using a second, faster communication bus of the computing system. A second part of the two-part process can then take place utilizing the second, faster communication bus.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
SYSTEMS AND METHODS FOR IDENTIFYING FIBER LINK FAILURES IN AN OPTICAL NETWORK
Systems, computer program products, and methods are described herein for network discovery, port identification, and/or identifying fiber link failures in an optical network, in accordance with an embodiment of the invention. The present invention may be configured to sequentially connect each port of an optical switch to a network port of a server and generate, based on information associated with network devices connected to the ports, a network map. The network map may identify which network devices are connected to which ports of the optical switch and may permit dynamic port mapping for network installation, upgrades, repairs, and/or the like. The present invention may also be configured to determine a fiber link in which a failure occurred and reconfigure the optical switch to allow communication between an optical time-domain reflectometer and the fiber link to test the fiber link.
H04B 10/073 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an out-of-service signal
H04B 10/071 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using a reflected signal, e.g. using optical time domain reflectometers [OTDR]
H04B 10/077 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using a supervisory or additional signal
H04B 10/079 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
A system includes multiple devices, multiple processors and a cross-network bridge. The cross-network bridge includes a bus interface for connecting to a system bus, and bridging circuitry configured to translate between (i) system-bus transactions that are exchanged between one or more local devices and one or more remote processors among the processors, the one or more local devices being coupled to the system bus and served by the system bus, and the one or more remote processors being located across a network from the cross-network bridge, and (ii) data units that convey the system-bus transactions, for transmitting and receiving as network packets over the network to and from the remote processors.
Apparatus for cooling an electro-optical (EO) device, the apparatus including (i) a housing configured to encapsulate components of the EO device, including: a first portion including a substrate for supporting the components, and a second portion configured to connect with the first portion, the second portion having an opening facing the components and a shelf surrounding the opening, and (ii) a cooling assembly including a base plate configured to be removably fitted in the opening of the second portion, an array of multiple c-shaped elements arranged in a face-to-back configuration on a first surface of the base plate, and a second surface of the base plate opposite the first surface, the second surface configured to face the components when the base plate is fitted in the opening, the base plate configured to transfer heat between the components and the c-shaped elements and having a higher thermal conductivity than the second portion of the housing.
Systems, computer program products, and methods are described herein for allocation of network resources for executing deep learning recommendation model (DLRM) tasks. An example system receives a task and an input specifying information associated with execution of the task, wherein the input comprises a plurality of hosts, determines a plurality of leaf switches based on the plurality of hosts, operatively couples each leaf switch to a subset of the plurality of hosts to configure a network structure; and triggers the execution of the task using the network structure.
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 41/12 - Discovery or management of network topologies
50.
DYNAMIC FABRIC REACTION FOR OPTIMIZED COLLECTIVE COMMUNICATION
A networking device and system are described, among other things. An illustrative system is disclosed to include a congestion controller that manages traffic across a network fabric using receiver-based packet scheduling and a networking device that employs the congestion controller for data flows qualified as a large data flow but bypasses the congestion controller for data flows qualified as a small data flow. For example, the networking device may receive information describing a data flow directed toward a processing network; determine, based on the information describing the data flow, a size of the data flow; determine that the size of the data flow is below a predetermined flow threshold; and, in response to determining that the size of the data flow is below the predetermined flow threshold, bypass the congestion controller.
Systems, methods, and devices for sharing time information between machines are provided. In one example, a system includes a Precision Time Protocol (PTP) Hardware Clock (PHC) and an application. The application receives time information from the PHC along with contextual metadata associated with the time information, analyzes the contextual metadata associated with the time information, and determines a context in which the PHC is disciplined. The context in which the PHC is disciplined may control a manner in which the application uses the time information.
Approaches disclosed herein provide for the use of load balancers to perform tasks such as to queue traffic. In at least one embodiment, a plurality of parallel links queue traffic by, at least in part, representing a weight of the plurality of parallel links as a plurality of bit strings, where the plurality of bit strings are converted to a plurality of sparse bit strings, and where the plurality of sparse bit strings to be used to generate a representative vector of sequentially interleaved bits of the plurality of sparse bit strings. The traffic for the plurality of parallel links can be queued according to the representative vector. The load balancer may be used in a computer network to manage traffic between nodes connected by parallel links.
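One plausible reading of the weight encoding is sketched below: each link's weight becomes a bit string with its ones spread sparsely, and the per-link strings are interleaved bit by bit into one representative vector that a scanner serves in order. The spreading rule and string length are assumptions, not the patent's encoding.

```python
def sparse_bit_string(weight, length):
    """Spread `weight` ones as evenly as possible across `length` slots."""
    bits = [0] * length
    for k in range(weight):
        bits[k * length // weight] = 1
    return bits

def representative_vector(weights, length=8):
    """Interleave the per-link sparse strings sequentially: scan slot by
    slot, and within each slot visit the links in order. Scanning the result
    yields a weighted service order for queueing traffic."""
    strings = [sparse_bit_string(w, length) for w in weights]
    return [link
            for slot in range(length)
            for link, s in enumerate(strings)
            if s[slot]]

print(representative_vector([3, 1, 2]))   # [0, 1, 2, 0, 2, 0]
```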
In one embodiment, a first device includes a first n-pulse-per-second (nPPS) output interface to be connected to a second nPPS input interface of a second device via a first clock connection, and to send a first pulse at time A to the second nPPS input interface for receipt at time B, and a first nPPS input interface to be connected to a second nPPS output interface of the second device via a second clock connection, to receive a second pulse at time D from the second nPPS output interface sent at time C, and to log time D in a first memory, and delay computation circuitry to compute a clock connection delay in the first clock connection and/or the second clock connection based on time A, time D, and a time difference between receiving the first pulse in, and sending the second pulse from, the second device.
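The arithmetic reduces to a classic round-trip computation; a short worked sketch, assuming symmetric connections (A and D are stamped by the first device's clock, B and C by the second's):

```python
def clock_connection_delay(a, b, c, d):
    """(D - A) is the round trip seen by the first device; (C - B) is the
    second device's turnaround between receiving pulse 1 and sending pulse 2,
    i.e. the time difference named in the abstract. Halving assumes the two
    clock connections have equal delay."""
    round_trip = (d - a) - (c - b)
    return round_trip / 2.0

print(clock_connection_delay(a=0.0, b=5.2, c=9.2, d=8.0))   # 2.0 per direction
```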
A decompression apparatus includes a cache memory and a decoder. The decoder is to receive a compressed input data stream including literals and matches. Each literal represents a data value, and each match represents a respective sequence of literals by a respective offset pointing to a respective past occurrence of the sequence of literals. The decoder is to decompress the input data stream by replacing each match with the corresponding past occurrence, so as to produce an output data stream. In replacing a given match with the corresponding past occurrence, the decoder is to (i) when the offset indicates that the past occurrence is cached in the cache memory, retrieve the past occurrence from the cache memory, and (ii) when the offset indicates that the past occurrence is not contained in the cache memory, fetch the past occurrence from an external memory.
G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
H03M 7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
H03M 7/42 - Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code using table look-up for the coding or decoding process, e.g. using read-only memory
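The cache-or-external-memory decision is the heart of the decoder, and an LZ77-style software sketch makes it concrete. The token layout is an illustrative assumption; the model keeps the whole output in one buffer and only tallies where the hardware would fetch from.

```python
def decompress(tokens, cache_size=4096):
    """Replay literals and matches. The branch shows where the hardware
    decoder would read from: the on-chip cache for recent offsets, external
    memory for offsets reaching further back."""
    out = bytearray()
    stats = {"cache": 0, "external": 0}
    for tok in tokens:
        if tok[0] == "lit":                      # ("lit", byte_value)
            out.append(tok[1])
        else:                                    # ("match", offset, length)
            _, offset, length = tok
            src = "cache" if offset <= cache_size else "external"
            stats[src] += length                 # where the fetch would go
            for _ in range(length):              # byte-wise copy allows overlap
                out.append(out[-offset])
    return bytes(out), stats

toks = [("lit", ord("a")), ("lit", ord("b")), ("match", 2, 4)]
print(decompress(toks))   # (b'ababab', {'cache': 4, 'external': 0})
```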
55.
SYSTEMS, METHODS, AND DEVICES FOR ENCRYPTED DATA TRANSFER
A network interface controller includes processing circuitry configured to pair with a local root of trust of a host device connected to the network interface controller and provide a key to an encryption device of the host device that enables the encryption device to encrypt data of one or more host device applications using the key. The encrypted data are stored in host device memory. The processing circuitry is configured to share the key with a remote endpoint and forward the encrypted data from the host device memory to the remote endpoint.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
56.
IN-BAND TRANSFER OF POWER-CONSUMPTION ALLOCATIONS AMONG PROCESSING DEVICES
A power allocation method includes allocating power-consumption allocations to multiple processing devices. Available power allocations, which are offered for transfer to other processing devices, are reported by one or more over-allocated processing devices among the processing devices. Power demands, which are required by one or more under-allocated processing devices among the processing devices, are reported by the under-allocated processing devices. At least some of the available power allocations are transferred from one or more of the over-allocated processing devices to one or more of the under-allocated processing devices.
A controller includes an interface and a processor. The interface is to communicate with multiple processing devices. The processor is to determine respective amounts of available electrical power that are available in the processing devices for executing jobs, to select a group of one or more of the processing devices for executing a new job, based at least on (i) the amounts of available electrical power and (ii) an expected power demand needed for executing the new job, and to assign the selected group to execute the new job.
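The selection step admits a simple greedy sketch: take the devices with the most spare power until the job's expected demand is covered. The greedy policy and the names are assumptions; the abstract only requires that both quantities be weighed.

```python
def select_group(available_power, expected_demand):
    """Pick a group of processing devices whose combined available electrical
    power covers the expected power demand of the new job."""
    group, covered = [], 0.0
    for dev, watts in sorted(available_power.items(),
                             key=lambda kv: kv[1], reverse=True):
        group.append(dev)
        covered += watts
        if covered >= expected_demand:
            return group
    return None    # no feasible group: the job cannot be placed yet

print(select_group({"dev0": 80.0, "dev1": 150.0, "dev2": 60.0}, 200.0))
# ['dev1', 'dev0']
```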
Technologies for optimizing performance of virtual switches in networking and accelerated computing are described. A virtual switch can identify an addition of a first data path (DP) rule in a flow table. The virtual switch can determine that the first DP rule and a second DP rule in the flow table overlap. The addition of the first DP rule causes the second DP rule to be deleted in the flow table. Before the second DP rule is deleted, the virtual switch can simulate receipt of a simulated packet comprising the specified portion of the network header corresponding to a second DP rule identifier of the second DP rule. The receipt of the simulated packet causes a third DP rule to be added to the flow table. After the third DP rule is added, the virtual switch can delete the second DP rule.
A network adapter includes a host interface and a scheduler. The host interface is configured to receive, from one or more hosts, packets for transmission to respective destinations over a network. The scheduler is configured to synchronize to a time-division schedule that is employed in the network, the time-division schedule specifying (i) multiple time-slots and (ii) multiple respective groups of the destinations that are reachable during the time-slots, and, based on the time-division schedule, to schedule transmission times of the packets to the network on time-slots during which the respective destinations of the packets are reachable.
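The scheduling rule reduces to a small lookup over the repeating schedule; a sketch under assumed names, where `slot_groups[i]` is the set of destinations reachable during slot i:

```python
def next_tx_slot(dest, current_slot, slot_groups):
    """Earliest slot at or after `current_slot` (wrapping around the
    repeating time-division schedule) whose group contains `dest`."""
    n = len(slot_groups)
    for step in range(n):
        slot = (current_slot + step) % n
        if dest in slot_groups[slot]:
            return slot
    return None   # destination never reachable under this schedule

groups = [{"A", "B"}, {"C"}, {"A", "D"}]
print(next_tx_slot("C", current_slot=2, slot_groups=groups))   # 1 (wraps)
```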
In some embodiments, a system includes a plurality of compute nodes, clock connections to connect at least some of the compute nodes and to distribute a master clock among the at least some compute nodes, and processing circuitry to discover a clock distribution topology formed by the compute nodes and the clock connections.
In one embodiment, a computing device includes a data interface to connect to and share data with configuration space registers of a silicon chip via an external physical interface of the silicon chip, and a processing unit to execute a procedure to update parts of firmware source code, which is configured to access the configuration space registers from a firmware processor embedded on the silicon chip via an internal physical interface of the silicon chip, so that the configuration space registers are instead accessed via the data interface and the external physical interface of the silicon chip, yielding modified firmware source code, and to execute a software compiler to compile the modified firmware source code, yielding compiled software.
A system comprises a first processing block configured to receive, from a first local resource, a formatted transaction in a format that is not recognizable by a remote endpoint; determine a first transaction category, from among a plurality of transaction categories, of the formatted transaction based on content of the formatted transaction; perform one or more operations on the formatted transaction based on the first transaction category to form a reformatted transaction in a format that is recognizable by the remote endpoint; and place the reformatted transaction in a queue for transmission to the remote endpoint.
A device includes one or more ports, match-action circuitry, and an instruction processor. The one or more ports are to exchange packets between the device and a network. The match-action circuitry is to match at least some of the packets to one or more rules so as to set respective actions to be performed, at least one of the actions including a programmable action. The instruction processor is to perform the programmable action by running user-programmable software code. The match-action circuitry is to provide the instruction processor information for performing the programmable action.
A peripheral device includes a bus interface and an Address Translation Service (ATS) controller. The bus interface is to communicate over a peripheral bus. The ATS controller is to communicate over the peripheral bus, including sending address translation requests and receiving address translations in response to the address translation requests, to cache at least some of the address translations in one or more Address Translation Caches (ATCs), to estimate one or more statistical properties of the received address translations, and to configure the one or more ATCs based on the one or more statistical properties.
G06F 12/1027 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
G06F 12/0811 - Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
65.
VOLTAGE VARIATION SUPPRESSION USING PSRR BOOST IN LINEAR REGULATORS
Approaches disclosed herein provide for increasing the Power Supply Rejection Ratio (PSRR) of a linear regulator to reduce noise in the voltage from a power source. In at least one embodiment, at least a portion of the voltage to be output from the linear regulator is identified as including noise and is received by a correction circuit. The correction circuit processes the received noise to reverse its polarity and add gain. The processed noise is fed back into at least a portion of the voltage and can be used to suppress the noise of the voltage as the PSRR of the linear regulator increases.
G05F 1/565 - Regulating voltage or current wherein the variable actually regulated by the final control device is DC using semiconductor devices in series with the load as final control devices sensing a condition of the system or its load in addition to means responsive to deviations in the output of the system, e.g. current, voltage, power factor
G05F 1/575 - Regulating voltage or current wherein the variable actually regulated by the final control device is DC using semiconductor devices in series with the load as final control devices characterised by the feedback circuit
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
66.
DIRECT CONTACT HEAT TRANSFER COUPLINGS FOR PLUGGABLE NETWORK INTERFACE DEVICES
A pluggable network interface device includes a printed circuit board (“PCB”), a housing, and a heatsink. The heatsink includes a first surface and a second surface, disposed opposite the first surface, that is maintained in direct contact with a surface of a heat-generating circuit of the PCB. The housing includes an outer shell defining an exterior of the housing, a receiving cavity disposed inside the outer shell, and an aperture extending through a first side of the outer shell from the exterior of the housing into the receiving cavity. A portion of the PCB and the second surface of the heatsink are disposed inside the receiving cavity, while a portion of the heatsink extends from within the receiving cavity through the aperture, arranging the first surface of the heatsink adjacent to the exterior of the housing.
Some embodiments described herein provide a method for reducing congestion in a network using per-hop telemetry data and network adapters implementing such a method. As compared to conventional congestion control mechanisms, the method may include sharing telemetry data between flows on a per-hop basis to accelerate the convergence of congestion control efforts in a multi-flow network environment. In some embodiments, a destination network adapter may be configured to generate a response packet that includes all of the telemetry data for a packet without combining it, such that the response packet includes distinct telemetry data for each hop in the flow. A sender network adapter may be configured to parse the telemetry data in the response packet to determine the queue length and the link utilization for each hop in the flow through the network, which it then uses to determine a hop rate for each hop in the flow.
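A sender-side sketch of the final step: parse the per-hop (queue length, link utilization) records from the response packet and bound the flow by the most constrained hop. The specific control law below (scale by spare utilization, penalize backlog) is illustrative, not the disclosure's formula.

```python
def flow_rate_from_telemetry(hop_records, link_capacity_bps):
    """`hop_records` holds one (queue_length, link_utilization) pair per hop,
    as parsed from the response packet. Returns the flow rate implied by the
    slowest hop."""
    hop_rates = []
    for queue_len, utilization in hop_records:
        spare = max(1.0 - utilization, 0.05)        # floor keeps a probe rate
        backlog_penalty = 1.0 / (1.0 + queue_len)   # drain standing queues
        hop_rates.append(link_capacity_bps * spare * backlog_penalty)
    return min(hop_rates)                           # slowest hop bounds the flow

hops = [(0.0, 0.30), (12.0, 0.95), (2.0, 0.50)]
print(flow_rate_from_telemetry(hops, 100e9))        # bound by the congested hop
```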
Systems, computer program products, and methods are described herein for allocation of network resources for executing large language model (LLM) tasks. An example system receives an LLM task and an input specifying information associated with execution of the LLM task, wherein the input comprises at least a parallelism parameter and a communication pattern; determines a plurality of hosts based on at least the parallelism parameter and the communication pattern; determines a plurality of switches based on the plurality of hosts; operatively couples the plurality of hosts to the plurality of switches to configure a network point of delivery (POD); and triggers execution of the LLM task using the network POD.
Apparatuses, systems, and methods are provided for optical communication with selective communication states. An example optical communication cable formed of one or more optical fibers includes a first end and a second end opposite the first end. The second end includes a first connector optically coupled with a first module, and a second connector optically coupled with a second module. The optical communication cable further includes a signal direction component optically coupled with the first end and the second end. The signal direction component is configured to switch between a first connection state in which optical connectivity is established between the first end and first connector and a second connection state in which optical connectivity is established between the first end and the second connector.
A peripheral device includes two or more peripheral-bus modules, a coherent interconnect, and two or more tunnel adapters coupled between the peripheral-bus modules and the coherent interconnect. The peripheral-bus modules are to exchange peripheral-bus packets with one another in accordance with a peripheral-bus protocol. The coherent interconnect is to connect electronic components of the peripheral device in accordance with a coherent interconnect protocol. The tunnel adapters are to convey the peripheral-bus packets between the peripheral-bus modules over the coherent interconnect, by translating between the peripheral-bus packets and messages of the coherent interconnect protocol.
Systems, devices, and methods are provided. In one example, a system is described that includes circuits to receive a packet associated with a destination from a source, determine a congestion associated with the destination, determine the congestion associated with the destination is outside a range, based on determining the congestion associated with the destination is outside the range, generate a notification packet, and send the notification packet to the source.
Apparatus, systems, and methods are provided that indicate the presence of optical connectivity in optical communication implementations. An example system includes a first optical transceiver and a first optical communication medium defining a first end connected with the first optical transceiver and a second end opposite the first end. The system further includes a first patch panel including one or more panel ports where a first panel port of the first patch panel connects with the second end of the first optical communication medium. The system also includes a first connection indication element that indicates the presence of an optical path between the first optical transceiver and the first patch panel in an instance in which the first optical transceiver is optically coupled with the first patch panel via the first optical communication medium.
A device, communication system, and method are provided. In one example, a system for switch generated explicit packet discard notifications is described that includes circuits to detect a packet is to be dropped; extract information from the packet to be dropped; and generate a notification using the extracted information from the packet to be dropped. The system also includes an interface to send the generated notification to a source associated with the packet to be dropped, wherein the notification is generated and sent from a switch.
In one embodiment, a clock syntonization system includes a first compute node including a first physical hardware clock to operate at a first clock frequency, a second compute node, and an interconnect data bus to transfer data from the first compute node at a data rate indicative of the first clock frequency of the first physical hardware clock. The second compute node includes clock synchronization circuitry to derive a second clock frequency from the data rate of the transferred data and to provide a clock signal at the derived second clock frequency.
H04L 7/027 - Speed or phase control by the received code signals, the signals containing no special synchronisation information extracting the synchronising or clock signal from the received signal spectrum, e.g. by using a resonant or bandpass circuit
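The core relationship is arithmetic: if the transmitter serializes a fixed number of bits per clock cycle, the receiver can recover the transmit frequency from the observed line rate. The serializer width and line rate below are illustrative numbers only.

```python
# Illustrative arithmetic for frequency recovery from a data rate.
bits_per_cycle = 64                  # assumed serializer width
observed_rate_bps = 25.6e9           # assumed measured data rate on the bus
derived_freq_hz = observed_rate_bps / bits_per_cycle
print(f"derived clock: {derived_freq_hz / 1e6:.1f} MHz")  # 400.0 MHz
```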
Devices, apparatuses, systems, and methods are provided for improved thermal management in networking computing devices. An example thermal management apparatus includes a housing defining a first end and a second end opposite the first end. The apparatus further includes an electronic component supported within the housing, such as a GPU. The apparatus includes a primary inlet that receives a primary airflow having a first temperature and a secondary inlet that receives a secondary airflow having a second temperature where the second temperature is different than the first temperature. The primary airflow and the secondary airflow are collectively configured to dissipate heat generated by the electronic component.
In one embodiment, a network device includes a network interface, a host device interface, and packet processing circuitry. The network interface is to receive secured packets from a remote device over a packet data network, each of the secured packets being secured according to a security protocol and including a respective security protocol header and a Transmission Control Protocol (TCP) packet that is encrypted according to the security protocol. The host device interface is to connect the network device to a host device. The packet processing circuitry is to decrypt each of the secured packets based on the respective security protocol header, yielding multiple decrypted packets including decrypted TCP packets; aggregate the decrypted TCP packets into a single aggregated packet; and provide the single aggregated packet to software running on a processor of the host device via the host device interface.
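A hedged sketch of that receive path: decrypt each secured packet, then coalesce consecutive TCP payloads into one aggregated packet handed to the host. The `decrypt` callable is a stand-in; a real device would apply the security protocol named in each packet's header.

```python
# Sketch of decrypt-then-aggregate on the receive path; formats assumed.
def rx_aggregate(secured_packets, decrypt):
    decrypted = [decrypt(p["ciphertext"], p["security_header"])
                 for p in secured_packets]
    # Aggregate consecutive TCP payloads into one packet for the host.
    base_seq = decrypted[0]["seq"]
    payload = b"".join(d["payload"] for d in decrypted)
    return {"seq": base_seq, "payload": payload}

fake_decrypt = lambda ct, hdr: {"seq": hdr["seq"], "payload": ct}
pkts = [{"ciphertext": b"he", "security_header": {"seq": 1}},
        {"ciphertext": b"llo", "security_header": {"seq": 3}}]
print(rx_aggregate(pkts, fake_decrypt))  # one aggregated packet
```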
Methods and devices are provided for aligning an optical connector with an optical component, such as a photonic integrated circuit (PIC). An example method includes providing an optical component comprising at least one optical port on a surface of the optical component, where the optical component includes a first correction device and a second correction device. The method further includes aligning an optical connector with the first correction device to achieve a first positional setting of the optical connector. The method may include aligning the optical connector with the second correction device to achieve a second positional setting of the optical connector. The optical connector may be moved a predefined distance while maintaining the first and second positional settings to bring the optical connector into an operative position with respect to the at least one optical port.
A system includes a memory device and a processing device, operatively coupled to the memory device, to perform operations including receiving a first hotspot temperature measurement and a second hotspot temperature measurement with respect to a hotspot. The first hotspot temperature measurement is based on a first temperature measurement received from a first thermal sensor, and a first thermal offset associated with the first thermal sensor. The second hotspot temperature measurement is based on a second temperature measurement received from a second thermal sensor, and a second thermal offset associated with the second thermal sensor. The operations further include determining, using at least the first and second hotspot temperature measurements, a generalized hotspot temperature measurement of the hotspot.
G01K 3/06 - Thermometers giving results other than momentary value of temperature giving mean values; Thermometers giving results other than momentary value of temperature giving integrated values in respect of space
G01K 1/02 - Means for indicating or recording specially adapted for thermometers
G01K 3/00 - Thermometers giving results other than momentary value of temperature
G01K 7/00 - Measuring temperature based on the use of electric or magnetic elements directly sensitive to heat
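A minimal sketch of the generalized measurement described above, assuming it is a simple combination of offset-corrected sensor readings; the real combining operation may differ (e.g., a weighted model per sensor placement).

```python
# Offset-corrected readings combined into a generalized hotspot estimate.
def hotspot_estimate(t1, offset1, t2, offset2, combine=max):
    m1 = t1 + offset1   # first hotspot temperature measurement
    m2 = t2 + offset2   # second hotspot temperature measurement
    return combine(m1, m2)

print(hotspot_estimate(78.0, 4.5, 80.0, 1.5))            # max  -> 82.5
print(hotspot_estimate(78.0, 4.5, 80.0, 1.5,
                       combine=lambda a, b: (a + b) / 2))  # mean -> 82.0
```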
79.
In-Service Software Update Managed by Network Controller
A controller includes one or more ports and a processor. The one or more ports are to communicate with a network that includes multiple network devices. The processor is to receive, from a network device in the network, a request to perform a software update in the network device, to evaluate a permission condition in response to the request, to send to the network device a response granting the request when the permission condition is met, and to at least temporarily deny the request when the permission condition is not met. The network device issues the request in response to an instruction from a Network Management System (NMS) that is separate from the controller.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 49/356 - Switches specially adapted for specific applications for storage area networks
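The grant/deny logic of the controller above can be sketched compactly. The permission condition shown (a cap on concurrent in-service updates, so the network keeps forwarding capacity) is an assumed example, not the condition the abstract specifies.

```python
# Sketch of a controller that grants or temporarily denies update requests.
class UpdateController:
    def __init__(self, max_concurrent=1):
        self.updating = set()
        self.max_concurrent = max_concurrent

    def request_update(self, device_id):
        # Assumed permission condition: limit concurrent updates.
        if len(self.updating) < self.max_concurrent:
            self.updating.add(device_id)
            return "granted"
        return "denied"          # at least temporarily denied

    def update_done(self, device_id):
        self.updating.discard(device_id)

ctrl = UpdateController()
print(ctrl.request_update("sw1"))  # granted
print(ctrl.request_update("sw2"))  # denied until sw1 finishes
```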
80.
Confidential computing with device memory isolation
A confidential computing (CC) apparatus includes a CPU to run a hypervisor that hosts one or more Trusted Virtual Machines (TVMs). The CC apparatus provides inter-TVM isolation and hardware isolation between the one or more TVMs and the hypervisor. The CPU is further to run a Device TVM (DTVM) including an interface to a network device, and a hypervisor interface which presents the DTVM to the hypervisor as a TVM, such that the CC apparatus provides inter-TVM isolation and hardware isolation between the DTVM, the one or more TVMs, and the hypervisor, as if the DTVM were a TVM. The DTVM is to receive from the hypervisor allocations of memory space in external memory for the network device, and to allocate the memory space in the external memory to the network device in response to the hypervisor allocations.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
G06F 21/79 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
81.
Mirror image of geometrical patterns in stacked integrated circuit dies
An electronic device includes first and second integrated circuit (IC) dies. The first IC die includes a first set of contact pads arranged, on a first surface of the first IC die, in a first geometrical pattern which is non-symmetrical under reflection about a given axis in a plane of the first IC die. The second IC die includes a second set of the contact pads that are arranged, on a second surface of the second IC die, in a second geometrical pattern that is a mirror image of the first geometrical pattern with respect to the given axis. The second surface of the second IC die is flipped about the given axis and faces the first surface of the first IC die, and the contact pads of the first and second sets are aligned with one another and mounted on one another.
H01L 25/065 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices, all the devices being of a type provided for in a single subclass of subclasses, e.g. assemblies of rectifier diodes; the devices not having separate containers; the devices being of a type provided for in group
G03F 1/20 - Masks or mask blanks for imaging by charged particle beam [CPB] radiation, e.g. by electron beam; Preparation thereof
H01L 23/00 - Details of semiconductor or other solid state devices
82.
Prefetcher engine configuration selection with multi-armed bandit
In one embodiment, a system includes prefetcher engines and a processor. The prefetcher engines are to predict next memory access addresses of a memory from which to load data to a cache during execution of a software application, and to load the data from the predicted next memory access addresses to the cache. The processor is to control the prefetcher engines according to configurations selected by a machine learning agent in exploration phases and in exploitation phases during execution of the software application; execute the machine learning agent to select from a pruned set of configurations to control the prefetcher engines in the exploration phases; perform measurements on the system during execution of the machine learning agent; and execute the machine learning agent to select from the configurations, based on the performed measurements, to maximize potential rewards from controlling the prefetcher engines in the exploitation phases.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/0837 - Cache consistency protocols with software control, e.g. non-cacheable data
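The prefetcher-selection abstract above does not fix the bandit algorithm, so the sketch below substitutes an epsilon-greedy agent as an assumed variant: explore a pruned configuration set with small probability, otherwise exploit the best-measured configuration.

```python
# Epsilon-greedy stand-in for the machine learning agent (assumed variant).
import random

def run_bandit(pruned_configs, measure_reward, steps=100, eps=0.1):
    totals = {c: 0.0 for c in pruned_configs}
    counts = {c: 0 for c in pruned_configs}
    for _ in range(steps):
        if random.random() < eps:                     # exploration phase
            cfg = random.choice(pruned_configs)
        else:                                         # exploitation phase
            cfg = max(pruned_configs,
                      key=lambda c: totals[c] / counts[c] if counts[c] else 0.0)
        r = measure_reward(cfg)   # e.g. IPC or hit rate measured on the system
        totals[cfg] += r
        counts[cfg] += 1
    return max(pruned_configs, key=lambda c: totals[c] / max(counts[c], 1))

best = run_bandit(["cfg-a", "cfg-b"], lambda c: 1.0 if c == "cfg-b" else 0.5)
print(best)  # usually cfg-b
```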
In one embodiment, a method includes finding an impact on performance of a device from changing settings of preprocessor engines applied to benchmark applications being executed by the device, defining groups of the preprocessor engines responsively to the impact on the performance of the device from changing the settings of the preprocessor engines, and providing different preprocessor engine configurations based on the settings to be applied to the preprocessor engines such that for each one of the defined groups a respective setting is to be applied equally to the preprocessor engines of the one group, thereby reducing a number of the preprocessor engine configurations available for selection by a machine learning agent.
In one embodiment, a system includes a processor to control a resource according to policies selected by a multi-armed bandit machine learning agent in exploration phases and in exploitation phases; execute the multi-armed bandit machine learning agent to select from the policies to control the resource in the exploration phases according to probabilities of exploring corresponding ones of the policies, wherein the probabilities include different probabilities; perform measurements on the system during execution of the multi-armed bandit machine learning agent; and execute the multi-armed bandit machine learning agent to select from the policies to maximize potential rewards from controlling the resource in the exploitation phases based on the performed measurements. The system also includes a memory to store data used by the processor.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
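The distinguishing point in the abstract above is that exploration probabilities differ per policy. A one-liner makes this concrete; the policy names and weights are assumptions.

```python
# Non-uniform exploration: some policies are explored more often than others.
import random

policies = ["aggressive", "balanced", "conservative"]
explore_probs = [0.6, 0.3, 0.1]   # different probabilities, as the abstract notes

def pick_exploration_policy():
    return random.choices(policies, weights=explore_probs, k=1)[0]

print(pick_exploration_policy())
```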
A device, communication system, and method are provided. In one example, a system for routing traffic is described that includes a plurality of ports to facilitate communication over a network. The system also includes a controller to selectively activate or deactivate ports of the system based on queue depths and additional information to improve power efficiency of the system.
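A hedged sketch of the queue-depth-driven port control: park shallow-queued ports to save power and wake them under load. The thresholds and hysteresis scheme are assumptions.

```python
# Sketch: activate/deactivate ports from queue depth to save power.
def adjust_ports(ports, low_depth=8, high_depth=64):
    for port in ports:
        if port["active"] and port["queue_depth"] < low_depth:
            port["active"] = False     # park the port to save power
        elif not port["active"] and port["queue_depth"] > high_depth:
            port["active"] = True      # wake it to absorb the backlog
    return ports

print(adjust_ports([{"queue_depth": 2, "active": True},
                    {"queue_depth": 100, "active": False}]))
```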
In one embodiment, a system includes a processor to receive machine learning training data including label scores based on measurements of device performance during execution of benchmark applications for different prefetcher engine configurations, and corresponding device hardware states, and train configuration specific machine learning regression models based on the received machine learning training data to provide corresponding configuration specific device performance predictions based on given device hardware states, and a memory to store data used by the processor.
In one embodiment, a method includes receiving data of a set of configurations of preprocessor engines, receiving measurements of performance of a device executing benchmark applications while changing a configuration of preprocessor engines selected from the set of configurations of preprocessor engines, defining an order of at least some of the configurations based on the measurements, and providing a pruned set of configurations based on the defined order of the at least some configurations.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
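The pruning step above reduces the search space handed to the bandit agent. A minimal sketch: order configurations by measured benchmark performance and keep a top slice; the keep fraction is an assumption.

```python
# Order configurations by measurement, keep the best slice as the pruned set.
def prune(measurements: dict, keep_fraction=0.25):
    ordered = sorted(measurements, key=measurements.get, reverse=True)
    keep = max(1, int(len(ordered) * keep_fraction))
    return ordered[:keep]   # pruned set offered to the machine learning agent

scores = {"cfg-a": 0.91, "cfg-b": 0.88, "cfg-c": 0.55, "cfg-d": 0.42}
print(prune(scores))  # ['cfg-a']
```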
A device includes a first data storage circuit among multiple data storage circuits, an arbiter circuit coupled to the multiple data storage circuits, the arbiter circuit to determine that a value of the first data storage circuit has changed, an encoder circuit coupled to the arbiter circuit, the encoder circuit to encode first information pertaining to the value of the first data storage circuit that has changed into a virtual wire waveform, and a transmitter to transmit the virtual wire waveform in a mesh message over a mesh network.
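The "virtual wire" idea is that a changed register value travels as a small encoded word in a mesh message rather than as a physical wire. The bit layout below is an assumption for illustration.

```python
# Illustration: encode "register X changed to value V" as a virtual-wire word.
def encode_virtual_wire(reg_index: int, new_value: int) -> int:
    # [15:8] register index, [7:0] new value -- hypothetical layout
    return ((reg_index & 0xFF) << 8) | (new_value & 0xFF)

def decode_virtual_wire(word: int):
    return (word >> 8) & 0xFF, word & 0xFF

word = encode_virtual_wire(reg_index=3, new_value=0xA5)
assert decode_virtual_wire(word) == (3, 0xA5)
```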
Systems and methods are directed toward virtualizing network connections to transparently apply one or more connection policies responsive to features of a connection request. A network connection between two hosts may be established using a service associated with the respective hosts, and connection information may be provided to an underlying hardware resource. When data transmission requests are received, the underlying hardware resource may determine the appropriate connection settings for data transmission without querying additional logs or hardware resources.
Systems and methods herein are for a video encoder to be associated with a rate-distortion optimization (RDO) module and a calibration module, where the RDO module may perform RDO for received frames of a media stream and may generate at least an RDO output that is based in part on quality measures between the received frames and decoded frames, and where the calibration module may provide an evaluation metric that is to scale or transform at least a range of the quality measures, the scaling or transforming potentially reducing an effect on compression performed in the video encoder.
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
91.
APPARATUS AND METHOD FOR IMPROVED NETWORK RESOURCE MANAGEMENT
Apparatus and method for improved network resource management are described herein. An example computing apparatus comprises a network adapter configured to: receive, via a network connection, a data packet from a communication network; determine, from a first memory block, a value of an extended portion of a local counter associated with the network connection in response to receiving the data packet; capture, from a second memory block, a value of a global counter; compare the value of the extended portion of the local counter with the value of the global counter; and in an instance in which the comparison identifies a mismatch: update the value of the extended portion of the local counter based on the value of the global counter; and set a current value of a bit indicating a status of the network connection, wherein the bit is associated with a plurality of bits.
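A compact sketch of the compare-and-update step just described. The memory-block layout and the meaning of the status bit are assumptions drawn loosely from the abstract.

```python
# Sketch: compare the local counter's extended portion with the global
# counter on packet receipt; on mismatch, resync and flag the connection.
def on_packet(conn_state, global_counter):
    local_ext = conn_state["local_counter_ext"]   # from the first memory block
    if local_ext != global_counter:               # mismatch detected
        conn_state["local_counter_ext"] = global_counter
        conn_state["status_bit"] = 1              # assumed status semantics
    return conn_state

print(on_packet({"local_counter_ext": 41, "status_bit": 0}, global_counter=42))
```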
Multipathing for session-based remote direct memory access (SRDMA) may be used for congestion management. A given SRDMA session group may be associated with multiple SRDMA sessions, each having its own unique 5-tuple. A queue pair (QP) associated with the SRDMA session group may provide a packet for transmission using the SRDMA session group. The SRDMA session group may enable the packet to be transmitted using any of the associated SRDMA sessions. Congestion levels for each of the SRDMA sessions may be monitored and weighted. Therefore, when a packet is received, an SRDMA session may be selected based, at least, on the weight to enable routing of packets to reduce latency and improve overall system efficiency.
H04L 47/122 - Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
H04L 47/19 - Flow control; Congestion control at layers above the network layer
H04L 47/2408 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting different services, e.g. a differentiated services [DiffServ] type of service
H04L 47/6295 - Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
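The weighted selection in the SRDMA abstract above can be sketched as inverse-congestion sampling across the session group's 5-tuples; the weighting function and field names are assumptions.

```python
# Hedged sketch: pick an SRDMA session, favoring less-congested 5-tuples.
import random

def pick_session(sessions):
    # Weight inversely to monitored congestion (epsilon avoids divide-by-zero).
    weights = [1.0 / (1e-6 + s["congestion"]) for s in sessions]
    return random.choices(sessions, weights=weights, k=1)[0]

group = [{"tuple5": ("10.0.0.1", 7001, "10.0.0.2", 4791, "UDP"), "congestion": 0.8},
         {"tuple5": ("10.0.0.1", 7002, "10.0.0.2", 4791, "UDP"), "congestion": 0.1}]
print(pick_session(group)["tuple5"])  # usually the second, lighter path
```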
93.
STATISTICAL HIGH BANDWIDTH AND PACKET RATE CROSSBAR WITH LOW CELL COUNT
An apparatus includes a crossbar circuit that routes one or more packets between one or more ingress domains and one or more egress domains. The crossbar circuit includes a plurality of sub-crossbar domains. An ingress control circuit associated with the one or more ingress domains may distribute packet data of the one or more packets to the sub-crossbar domains. An egress control circuit of the apparatus receives data bits associated with the packet data from egresses associated with the plurality of sub-crossbar domains. The egress control circuit may reorder, or refrain from reordering, the data bits based on an attribute associated with the distribution of the packet data.
A system facilitates efficient operation of plural agents. The system comprises a device which services the plural agents, and functionality which resides on the device and provides a given quality of service, defined in terms of at least one resource, to at least one subset of agents from among the plural agents.
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 41/5003 - Managing SLA; Interaction between SLA and QoS
H04L 41/5025 - Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
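A minimal sketch of the shuffle step, treating the descriptor as a permutation the shuffle unit applies to collected elements; the descriptor format is an assumption.

```python
# Descriptor-driven shuffle: descriptor[i] names which input element
# lands in output slot i.
def shuffle(collected, descriptor):
    return [collected[src] for src in descriptor]

data = ["a", "b", "c", "d"]
print(shuffle(data, descriptor=[3, 1, 2, 0]))  # ['d', 'b', 'c', 'a']
```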
A system includes a processing device and a peripheral device. The processing device is to assign a memory region in a memory. The peripheral device is to set a memory-access policy responsively to usage characteristics of the memory region, and to access data in the memory region using Direct Memory Access (DMA) in accordance with the memory-access policy.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/0831 - Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
A network device, a network interface controller, and a switch are provided. In one example, a shared buffer includes a plurality of portions; one or more ports read data from the shared buffer and write data to the shared buffer; and a controller circuit correlates each egress port with available portions, among the plurality of portions, that are located as close as possible to the respective egress port.
A device receives a packet from a local network. The packet may be directed toward a cloud computing resource. The device determines that the packet is associated with a new packet flow. In response to determining that the packet is associated with the new packet flow, the device provides one or more packets from the new packet flow to a machine learning model for packet inspection. The device receives an output from the machine learning model and routes the new packet flow based on the output received from the machine learning model. The output indicates whether or not the new packet flow is associated with a network attack.
Approaches disclosed herein provide for the use of load balancers to perform tasks such as queueing traffic. In at least one embodiment, a plurality of parallel links queue traffic by, at least in part, representing weights of the plurality of parallel links as a plurality of bit strings, where the plurality of bit strings are converted to a plurality of sparse bit strings, and where the plurality of sparse bit strings are to be used to generate a representative vector of sequentially interleaved bits of the plurality of sparse bit strings. The traffic for the plurality of parallel links can be queued according to the representative vector. The load balancer may be used in a computer network to manage traffic between nodes connected by parallel links.
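One plausible reading of the scheme, offered as an assumption: each link's weight becomes a sparse (evenly spread) bit string, and the strings are bit-interleaved into a single service vector whose entries grant service in proportion to weight.

```python
# Assumed interpretation: sparse unary weights, sequentially interleaved.
def sparse_bits(weight: int, length: int) -> list:
    """Spread `weight` one-bits evenly across `length` slots."""
    bits = [0] * length
    if weight <= 0:
        return bits
    for k in range(weight):
        bits[(k * length) // weight] = 1
    return bits

def representative_vector(weights: dict, length: int = 8) -> list:
    strings = {link: sparse_bits(w, length) for link, w in weights.items()}
    # Sequentially interleave: slot 0 of every link, then slot 1, ...
    vector = []
    for i in range(length):
        for link, bits in strings.items():
            if bits[i]:
                vector.append(link)   # this slot grants service to `link`
    return vector

print(representative_vector({"link-a": 3, "link-b": 1}))
# -> ['link-a', 'link-b', 'link-a', 'link-a']  (3:1 service ratio)
```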
A system for transmitting data is described, among other things. An illustrative system is disclosed to include one or more circuits to transmit message-based data over packets. The circuits are capable of identifying a first message; transmitting a first portion of the first message in a first packet, the first packet including a bit indicating the first packet is message-based; and transmitting an end portion of the first message in a second packet, the second packet including a first bit indicating the second packet is message-based and a second bit indicating the second packet comprises the end portion of the first message.
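A sketch of the two flag bits the abstract names; the header layout around them is an assumption.

```python
# Hypothetical header: one byte of flags, two-byte length, then payload.
import struct

FLAG_MESSAGE_BASED = 0x1   # packet carries message-based data
FLAG_END_OF_MESSAGE = 0x2  # packet carries the final portion of a message

def make_packet(portion: bytes, last: bool) -> bytes:
    flags = FLAG_MESSAGE_BASED | (FLAG_END_OF_MESSAGE if last else 0)
    return struct.pack(">BH", flags, len(portion)) + portion

first = make_packet(b"hello, ", last=False)
final = make_packet(b"world", last=True)
print(first[0] & FLAG_END_OF_MESSAGE, final[0] & FLAG_END_OF_MESSAGE)  # 0 2
```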