SYSTEM AND METHOD FOR ADVANCED ANALYSIS OF EARNINGS CALL TRANSCRIPTS BASED ON ANALYSTS' BEHAVIOR AND QUESTION SENTIMENT WITH GENERATIVE QUESTION CAPABILITY
One example method includes preparing a document draft, performing an automated sentiment analysis on the document draft, performing an automated topic generation process based on the document draft and based on an outcome of the automated sentiment analysis, and performing an automated question generation process based on the document draft and based on an outcome of the automated topic generation process, where the automated question generation process is performed for each of one or more consumers of the document draft. After consumption of the document draft by the consumers, the method includes performing another automated sentiment analysis process on the document draft using sentiments associated with input, concerning the document draft, provided by the consumers, and automatically updating a list of topics, generated by the automated topic generation process, based on contents of a transcript that includes the input provided by the consumers.
An information handling system includes a chassis, a filter, a force sensing resistor, and a baseboard management controller. The chassis includes a bezel. The filter is in physical communication with the bezel of the chassis. The force sensing resistor is in physical communication with the bezel and with the filter. The baseboard management controller communicates with the force sensing resistor and receives a voltage level from the force sensing resistor. Based on the voltage level, the baseboard management controller determines a clogged level for the filter. The baseboard management controller determines whether the clogged level is greater than a threshold level. In response to the clogged level being greater than the threshold level, the baseboard management controller determines that the filter needs to be replaced and provides a filter clogged message.
A user equipment may receive a discovery signal broadcast by a non-terrestrial radio node. The user equipment may determine a coverage level based on the discovery signal and may transmit the coverage level and other coverage level information in a discovery signal coverage level report to a terrestrial radio node. The terrestrial node may receive coverage level information from another terrestrial node. The terrestrial node may transmit an expected coverage level, based on coverage level information corresponding to the non-terrestrial node, to the user equipment in an expected coverage level report. The non-terrestrial node may halt transmission of the discovery signal during a discovery signal deactivation period. The user equipment may use the expected coverage level, and other information, received in the expected coverage level report to communicate with the non-terrestrial node during the deactivation period.
An information handling system collects telemetry data associated with an application, and processes the telemetry data of the application to derive a pattern. The system analyzes the telemetry data to identify test data and a test scenario based on the pattern, and generates a test case based on the test data and the test scenario.
A zone timing advance value corresponding to a zone associated with a non-terrestrial network node may be determined. A terrestrial network node facilitating delivery of traffic with respect to a user equipment may determine to offload a portion of the traffic for delivery via the non-terrestrial network node. The terrestrial node may request the zone timing advance value, from a core network or from the non-terrestrial node, and may transmit a path switch request message comprising the zone timing advance value to the user equipment. The path switch request message may indicate traffic flows for which traffic is to be offloaded to the non-terrestrial node and may comprise indication of uplink resources corresponding to the non-terrestrial node to be usable by the user equipment to transmit the offloaded uplink traffic to the non-terrestrial node.
A method comprises receiving a request to predict a plurality of scores for a plurality of satisfaction metrics for a product, wherein the request identifies a plurality of factors associated with the product. The request is input to a multiple output classification machine learning model. Using the multiple output classification machine learning model, the plurality of scores are predicted in response to the request. The multiple output classification machine learning model is trained with at least one dataset comprising historical product satisfaction data corresponding to respective ones of a plurality of products.
Methods and systems for managing distributed systems are disclosed. The distributed system may be managed by monitoring for overloaded data processing systems of the distributed system. If identified, workloads from the overloaded data processing systems may be migrated to other data processing systems that are not overloaded to improve the likelihood of timely generating results from the workloads. The results may be used in distributed processes that may require the results to be timely. If the results are not timely obtained, the distributed processes may be impacted.
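The monitoring-and-migration loop described above can be rendered as a minimal sketch. The data structures, the "more workloads than capacity" overload test, and the least-loaded-target policy are assumptions made for illustration:

```python
def rebalance(loads: dict, capacity: int) -> dict:
    """Migrate workloads off overloaded data processing systems.

    loads maps a system name to its list of workload ids; a system holding
    more than `capacity` workloads is considered overloaded.
    """
    systems = {name: list(workloads) for name, workloads in loads.items()}
    for name in [s for s, w in systems.items() if len(w) > capacity]:
        while len(systems[name]) > capacity:
            # Choose the least-loaded other system as the migration target.
            target = min((t for t in systems if t != name),
                         key=lambda t: len(systems[t]))
            if len(systems[target]) >= capacity:
                break  # every candidate target is full; stop migrating
            systems[target].append(systems[name].pop())
    return systems
```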
Methods and systems for securing data are disclosed. The data may be secured by encrypting the data. The data may be sent from one edge device to another edge device. To secure the transmission of the data between edge devices, the selection of an encryption algorithm to encrypt the data may be based on progressive rules and other attributes. The progressive rules may be determined by an edge orchestrator and sent to all edge devices. The other attributes may include identifiers of the one edge device and the other edge device and a classification of the data.
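Rule-driven algorithm selection of this kind can be sketched as a first-match lookup. The rule shape, attribute names, and algorithm identifiers below are hypothetical:

```python
def select_algorithm(rules, sender, receiver, classification,
                     default="aes-128-gcm"):
    """Pick an encryption algorithm from progressive rules.

    rules is an ordered list of (predicate, algorithm) pairs, e.g. as
    distributed by an orchestrator; the first predicate whose key/value
    pairs all match the transmission's attributes wins.
    """
    attributes = {"sender": sender, "receiver": receiver,
                  "classification": classification}
    for predicate, algorithm in rules:
        if all(attributes.get(key) == value for key, value in predicate.items()):
            return algorithm
    return default
```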
A method comprises configuring a network protocol security engine to separately reside between a set of one or more first computing devices (e.g., clients) making one or more service requests (e.g., TCP requests) and a set of one or more second computing devices (e.g., web servers) responding to the one or more service requests. The network protocol security engine validates packets associated with the requests/responses to either accept or reject the requests/responses and thus prevent cyberattacks (e.g., flooding attacks) from adversely affecting resources of the set of one or more second computing devices.
An information handling system stores data collection policies. An embedded controller creates a data tolerance table and, based on the data collection policies, creates a timer associated with data collection of a first data type identified in one of the data collection policies. The embedded controller determines a data collection duration for the data collection of the first data type. In response to the timer no longer being needed for any of the data collection policies, the embedded controller disables the timer.
Managing power delivery, including receiving, at a first stage voltage regulator, an input power signal having a first voltage value; accessing a first table that indicates a correspondence between a parameter and voltage values of the input power signal; identifying a particular voltage value of the input power signal that is associated with the parameter; setting the voltage of the input power signal received by the first stage voltage regulator to the particular voltage value; accessing a second table indicating a correspondence between the parameter and voltage values of an output voltage of an output power signal of the first stage voltage regulator; identifying a specific voltage value of the output voltage associated with the parameter; setting the voltage of the output power signal output by the first stage voltage regulator to the specific voltage value; and providing, by the first stage voltage regulator, the output power signal to second stage voltage regulators.
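The two-table lookup can be sketched directly; the table contents and the parameter names below are invented for illustration:

```python
# Hypothetical first table: parameter -> input voltage of the power signal.
INPUT_TABLE = {"low-power": 5.0, "balanced": 9.0, "performance": 12.0}
# Hypothetical second table: parameter -> first-stage output voltage.
OUTPUT_TABLE = {"low-power": 1.8, "balanced": 3.3, "performance": 5.0}


def configure_first_stage(parameter: str) -> tuple:
    """Return (input_voltage, output_voltage) for the first-stage regulator,
    as selected from the two tables by the given parameter."""
    return INPUT_TABLE[parameter], OUTPUT_TABLE[parameter]
```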
Described are systems and methods to eliminate reflection of a display's incident light that is captured by a camera, such as a webcam. The display includes a polarizer that polarizes the emitted light, which is reflected by a medium such as glass toward the camera; the reflected light is therefore linearly polarized. A second polarizer is strategically placed in front of the lens set of the camera. Oriented perpendicular to the polarization of the reflected incident light, this polarizer eliminates the reflection from the reflective surfaces before it enters the lens set of the camera.
One example method includes generating a first test metric using an unknown dataset and second test metrics using shifted datasets that are shifted versions of a known dataset. A data distribution difference is determined between the unknown dataset and the one of the shifted datasets that is closest to the unknown dataset. A determination is made whether the data distribution difference is less than or equal to a first known threshold, and the data distribution difference is applied to a correlation model to determine an estimated test metric difference. A test metric difference is determined between the first test metric and the second test metric associated with the one of the shifted datasets that is closest to the unknown dataset. A determination is made whether a difference between the test metric difference and the estimated test metric difference is less than or equal to a second known threshold.
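The closest-shifted-dataset step can be sketched as below. The distance measure (a mean absolute difference) is a crude stand-in chosen for the sketch; the abstract does not specify how the data distribution difference is computed:

```python
def mean_abs_diff(a, b):
    """A crude stand-in for a data distribution difference measure."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)


def closest_shifted(unknown, shifted_datasets):
    """Return (index, difference) of the shifted dataset closest to `unknown`.

    The returned difference would then be checked against the first known
    threshold and fed to the correlation model in the described method.
    """
    diffs = [mean_abs_diff(unknown, shifted) for shifted in shifted_datasets]
    index = min(range(len(diffs)), key=diffs.__getitem__)
    return index, diffs[index]
```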
A radio access network node may receive, from a core network, traffic information corresponding to an extended reality traffic flow associated with an extended reality processing unit or an end extended reality appliance. The node may transmit to the processing unit an uplink resource grant configuration indicating a sharable uplink resource usable to transmit uplink traffic to the node. The processing unit may receive uplink traffic from the appliance and may relay the uplink traffic to the node. The processing unit may determine that continued relaying of the uplink traffic may violate a criterion and may schedule sharable uplink resource(s), indicated in the uplink resource grant configuration, for use by the appliance in transmitting the uplink traffic directly to the node. The processing unit may transmit an uplink resource sharing report to the node to facilitate the node avoiding blind decoding the uplink traffic received according to the scheduled resource(s).
One example method includes using an ensemble including machine learning (ML) forecasting models, determining respective relationships between past media spends and changes in search engine optimization (SEO) driven consumer demand for a product or service, ranking the relationships according to a criterion, based on the ranking, generating a forecast that comprises recommended future media spends, and effects expected to be achieved by those future media spends, and implementing the recommended future media spends.
An information handling system includes a storage and a basic input/output system (BIOS). The BIOS loads and boots a base root file system. The BIOS determines a requested capability for the information handling system and downloads an external root file system for the requested capability. The BIOS overlays the external root file system on the base root file system. The BIOS determines that a job associated with the requested capability is completed and, based on the job being completed, deletes the external root file system.
A mounting assembly is provided for mounting a liquid cooling adaptor manifold to a server rack. The mounting assembly includes the liquid cooling adaptor manifold, a mounting adaptor, and a mounting bracket. The liquid cooling adaptor manifold provides a cooling liquid from a source via first connectors of a first type to a device via second connectors of a second type. The mounting adaptor is affixed to the liquid cooling adaptor manifold and to the mounting bracket. The mounting bracket is affixed to a side wall of the server rack.
Managing devices connected to an IHS, including performing, at a first time, a calibration and configuration of a device management model, including: identifying contextual data associated with a device coupled to a port of the IHS; training, based on the contextual data, the device management model, including generating a configuration policy including configuration rules, the configuration rules for performing computer-implemented actions to automatically adjust power provided to the device and functionality enablement of the device; performing, at a second time, a steady-state management of the device, including: monitoring the contextual data associated with the device; and in response, i) accessing the device management model including the configuration policy, ii) identifying one or more of the configuration rules based on the monitored contextual data, and iii) applying the one or more configuration rules to perform computer-implemented actions to automatically adjust power provided to the device and functionality enablement of the device.
A user equipment may receive from a network node a payload notification configuration comprising baseline notification priority levels and notification priority increment values associated with identifiers of traffic flows. The user equipment may generate a status message, comprising status indications corresponding to payload corresponding to the traffic flows received by the user equipment, to be transmitted according to uplink control channel occasion resources. The user equipment may prioritize status indications in the status message according to baseline notification priorities indicated in the payload notification configuration and based on capacity of the occasion resources to accommodate the status indications. A baseline notification priority corresponding to a status indication not included in the status message may be increased by a notification priority increment such that the status indication is prioritized higher than another status indication, corresponding in the payload notification configuration to a higher baseline notification priority, in a next status message.
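The priority-bumping behavior can be sketched as follows; the data structures and the reset-to-baseline rule for sent indications are illustrative assumptions:

```python
def build_status_message(pending, config, capacity):
    """Select which status indications fit in this occasion.

    pending maps a traffic-flow id to its current effective priority;
    config maps a flow id to (baseline_priority, increment).
    Returns (flows included this occasion, updated pending priorities).
    """
    order = sorted(pending, key=lambda flow: pending[flow], reverse=True)
    included = order[:capacity]
    updated = {}
    for flow in pending:
        if flow in included:
            updated[flow] = config[flow][0]  # sent: reset to baseline
        else:
            # Not sent: bump by the flow's increment so a repeatedly
            # deferred indication eventually outranks higher-baseline ones.
            updated[flow] = pending[flow] + config[flow][1]
    return included, updated
```

Running two rounds with a capacity of one shows the deferred low-baseline flow overtaking the high-baseline flow in the second message.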
A computing environment, including a computing chassis including bays, the computing chassis including a first side and a second side; a first IOM positioned at the second side of the chassis; a second IOM positioned at the second side of the chassis; a plurality of storage devices, each storage device i) positioned within a respective bay of the bays and ii) coupled to both the first IOM and the second IOM; network devices, each network device positioned within a respective bay of the bays, each network device including network connectors positioned at a first side of the network device and network ports positioned at a second side of the network device opposite to the first side, each network device coupled to both the first IOM and the second IOM by respective network ports of the network device, wherein the network connectors are positioned at the first side of the chassis.
An extended reality processing unit and one or more extended reality appliances may share a joint access code. The processing unit may transmit the access code and identifiers corresponding to the processing unit and the appliances to a radio access network node. The node may configure the processing unit and the appliances with respective control channel search space resources and respective scrambling codes corresponding thereto. The node may receive traffic directed to one of the appliances. If transmission of control channel information corresponding to the traffic would violate a latency criterion corresponding thereto, the node may puncture resources, configured for use by the processing unit, occurring during a joint search space occasion, to transmit the control channel information to the appliance. The node may determine a control information format that fits in remnant portions of the punctured joint search space occasion to deliver control information to the processing unit.
H04W 72/232 - Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards a terminal; the control data signalling from the physical layer, e.g. DCI signalling
H04L 1/00 - Arrangements for detecting or preventing errors in the information received
A radio network node may receive from a core network a beam pattern configuration and a progressive paging configuration comprising one or more paging subpattern indications. The node may calculate beams of the subpatterns based on information in the beam pattern configuration. The node may transmit a paging message according to indicated subpatterns according to an order in the progressive paging configuration. The progressive paging configuration may comprise one or more wait periods corresponding to the one or more subpattern indications. The node may wait during a wait period associated with a subpattern after transmitting the paging message according to the subpattern. If a response is not received, the node may transmit the paging message according to a next-in-order subpattern. If a response to the paging message transmitted during a subpattern is received by the node, paging may be stopped and a connection with the user equipment may be established.
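The progressive paging loop can be sketched as below. The `got_response` callback stands in for transmitting over a subpattern's beams and waiting out the configured period; all names are illustrative:

```python
def progressive_page(subpatterns, wait_periods, got_response):
    """Page subpattern by subpattern, stopping on the first response.

    got_response(subpattern) simulates transmitting the paging message
    according to the subpattern and waiting the associated wait period
    for a response from the user equipment.
    """
    for subpattern, _wait in zip(subpatterns, wait_periods):
        if got_response(subpattern):
            return subpattern  # response received: stop paging, connect
        # Wait period elapsed without a response: try the next-in-order
        # subpattern from the progressive paging configuration.
    return None  # no response on any subpattern
```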
Techniques are provided for automated generation of software application test cases for evaluation of software application issues. One method comprises obtaining a first mapping of log event templates, related to log events in a software application log, to respective log event template vectors; obtaining a second mapping of test step vectors, generated using the log event template vectors, to respective test step functions, wherein a given test step vector comprises one or more of the log event template vectors; in response to obtaining information characterizing a software application issue: generating test step vector representations of the information characterizing the software application issue, using the first mapping; mapping the test step vector representations of the information to respective test step functions using the second mapping; and generating a test case logic flow to evaluate the software application issue using the mapped test step functions.
Example embodiments of the present disclosure provide a method, a device, and a computer program product for data query. The method includes selecting, according to a type of input data, a target pre-trained model from a deep network pool including a plurality of pre-trained models; performing, by using the selected target pre-trained model, feature extraction on the input data to determine text descriptors for the input data; and generating, based on the text descriptors, a query table for query. The method according to the present disclosure can select, according to different input data, different target pre-trained models from the deep network pool including the plurality of pre-trained models to process (e.g., compress) the input data. The method according to the present disclosure assembles a plurality of deep networks into a pool to automatically process data to obtain text descriptors for data retrieval, thereby achieving efficient data compression and retrieval.
An information handling system includes a storage and a baseboard management controller. The storage stores a current firmware release for the information handling system. The baseboard management controller generates a set of breakpoints for code differences from a previous firmware release to the current firmware release and executes multiple testcases for the current firmware release. Based on the execution of the testcases, the baseboard management controller stores breakpoint hitting events for the testcases in the storage. The baseboard management controller filters and ranks the breakpoint hitting events and maps the filtered and ranked breakpoint hitting events to the code differences. The baseboard management controller stores a reduced list of the code differences that are highly related to failed testcases of the current firmware release in the storage.
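One way the filter-rank-map step could look in code is sketched below; the event and mapping structures are invented for the sketch and the ranking heuristic (count of hits in failed testcases) is an assumption:

```python
from collections import Counter


def rank_suspect_diffs(hit_events, diff_of_breakpoint):
    """Rank code differences by their association with failed testcases.

    hit_events is a list of (breakpoint_id, testcase_passed) pairs;
    diff_of_breakpoint maps a breakpoint id to the code difference that
    contains it. Returns code differences, most suspicious first.
    """
    failed_hits = Counter(bp for bp, passed in hit_events if not passed)
    seen, ranked_diffs = set(), []
    for bp, _count in failed_hits.most_common():
        diff = diff_of_breakpoint[bp]
        if diff not in seen:  # keep a reduced, de-duplicated list
            seen.add(diff)
            ranked_diffs.append(diff)
    return ranked_diffs
```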
A method, computer program product, and computing system for identifying a change in a capacity of a cloud-deployed storage system. A capacity ratio and an IOPS ratio are determined for the cloud-deployed storage system. A portion of a cloud-deployed storage device is modified based upon, at least in part, one or more of the capacity ratio and the IOPS ratio. The portion of the cloud-deployed storage device is mapped to a portion of a logical storage device.
Digital twins for monitoring for server attacks in a federated learning system are disclosed. A digital twin is intertwined with a central server and configured to generate acceptability distributions based on updates received from clients at the server in the federated learning system. The acceptability distributions, which may account for the probability of transmission failures, are used to identify anomalous behaviors, including anomalies in global gradient updates, server attacks, and/or suspicious behavior.
Embodiments of the present disclosure provide a system and method to simulate property values generated by Desktop Bus (D-Bus) objects. According to one embodiment, an Information Handling System (IHS) includes multiple D-Bus services that communicate among one another using a D-Bus, and executable instructions to generate a D-Bus simulation service that communicates with one or more of the other D-Bus services through the D-Bus and sends simulation test data to one of the other D-Bus services in response to a request from that service.
A terrestrial radio network node, currently facilitating delivery of traffic with respect to a user equipment, may receive non-terrestrial capability information from the user equipment indicating a capability corresponding to the user equipment to transmit or receive traffic via a non-terrestrial node. The terrestrial node may receive from the user equipment measured parameter values indicative of performance with respect to the non-terrestrial node that may satisfy a quality-of-service criterion corresponding to traffic currently being delivered by the terrestrial node. The terrestrial node, based on the measured parameter values, may determine that the quality-of-service criterion may be satisfied by delivery of the traffic via the non-terrestrial node, and the terrestrial node may schedule non-terrestrial resources corresponding to the non-terrestrial node. The terrestrial node may indicate to the user equipment the scheduled non-terrestrial resources. The non-terrestrial node may deliver traffic with respect to the user equipment according to the scheduled non-terrestrial resources.
An apparatus comprises at least one processing device configured to identify an extended processing device and onboard the extended processing device to an edge infrastructure management platform, wherein the extended processing device is added as a component of a legacy device, and integrate the legacy device into the edge infrastructure management platform through one or more operations performed by the extended processing device.
One example method includes receiving, by a server from each client in a group of clients, metrics and parameters of local models relating to training of a model by the client, aggregating, by the server, the model parameters, determining, by the server using the metrics that have been sent, if a convergence criterion for the model has been met, and when the convergence criterion is determined not to have been met, calculating, by the server, a respective ε value for each of the clients, and transmitting, by the server to the clients, the respective ε values, and the ε values respectively indicate, to the clients, an extent to which the client should perform exploration, and/or exploitation, in a next training round for the model.
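A toy rendition of the server-side round is sketched below. The aggregation (a plain parameter average), the convergence test, and the loss-proportional ε formula are invented stand-ins; the abstract does not specify how the ε values are calculated:

```python
def server_round(client_params, client_losses, tolerance=0.01):
    """Aggregate client model parameters and, if the convergence criterion
    is not met, assign each client an exploration weight proportional to
    its share of the total loss (a worse client is told to explore more)."""
    n = len(client_params)
    aggregated = [sum(values) / n for values in zip(*client_params)]
    mean_loss = sum(client_losses) / n
    if mean_loss <= tolerance:
        return aggregated, None  # convergence criterion met: stop training
    total = sum(client_losses)
    epsilons = [loss / total for loss in client_losses]
    return aggregated, epsilons  # transmit one epsilon to each client
```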
A motif-based approach for subgraph matching in partially observed graphs is disclosed. Graphlets are extracted from a query graph, such as a graph of a workload. Motifs are built from the graphlets and the motifs are matched to a target graph, such as an infrastructure graph. Once the motifs are matched to nodes in the target graph, tasks of the workload, which correspond to nodes in the query graph, are placed in the infrastructure for execution.
An information handling system detects whether a text element in a graphical user interface is truncated by comparing the text element with an expected text element using a pattern-matching algorithm. The system also detects whether a non-text element in the graphical user interface is truncated by comparing the non-text element to a reconstructed copy of the non-text element.
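For the text-element case, a minimal matching rule can be sketched as below. Treating a prefix relationship (optionally ending in dots or an ellipsis) as evidence of truncation is an assumption for illustration, not the patented pattern-matching algorithm:

```python
def is_truncated(rendered: str, expected: str) -> bool:
    """Flag a GUI text element as truncated when it differs from the
    expected text but is a prefix of it, optionally ending in trailing
    dots or an ellipsis character."""
    if rendered == expected:
        return False
    prefix = rendered.rstrip(".\u2026")  # drop "..." or "…" tails
    return expected.startswith(prefix)
```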
Methods and systems for managing operation of a distributed system are disclosed. To manage the distributed system, a distributed ledger may be used to track the condition of the system. The distributed ledger may be managed in accordance with a consensus-based approach. The consensus-based approach may limit the impact of compromised entities by reducing their ability to introduce malicious data into the data upon which management decisions are made. Additionally, the distributed ledger may provide a shared understanding of the condition of the distributed system across the distributed system.
Methods and systems for securing blueprints are disclosed. A blueprint may be secured by requiring sufficient privilege to implement the blueprint. Sufficient privilege may be established through an analysis of the permissions of a blueprint user and the blueprint authors, which may include reviewing the privileges of the blueprint user and the blueprint authors. When sufficient privilege is found for the blueprint user and the blueprint authors, use of the blueprint may be permitted on an edge device.
Systems and methods for solving problems, including combinatorial optimization problems, are disclosed. A set of solutions to a combinatorial optimization problem is obtained from a quantum computing system such as a quantum annealer. A pattern mining operation is performed on the set of solutions output by the quantum annealer. The patterns are input to a solver to generate a solution to the initial problem.
Embodiments of the present disclosure provide a system and method to simulate hardware devices on an Information Handling System (IHS). According to one embodiment, an Information Handling System (IHS) includes executable instructions to receive, from an application, one or more first messages intended to be sent to a yet-to-be-developed hardware device using at least one software interface associated with the communication links, and generate one or more second messages that simulate a behavior of the yet-to-be-developed hardware device. The second messages may then be sent to the application to simulate the yet-to-be-developed hardware device.
Methods and systems for managing operation of endpoint devices are disclosed. The operation of the endpoint devices may be managed by requiring that the endpoint devices use a communication management framework to selectively send communications via network and audio interfaces. Additionally, the communication management framework may cause the endpoint devices to update the manner in which audio communications are used over time to distribute data. The updates to the manner in which the audio communications are used over time may reduce the likelihood of a malicious entity injecting malicious communications and/or snooping on audio communications between endpoint devices.
G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
The technology described herein is directed towards puncturing physical downlink shared channel (PDSCH) data in a defined slot symbol, otherwise reserved for CORESET (control resource set) information, with physical downlink control channel (PDCCH) information. A scheduler schedules PDSCH data into the otherwise defined/reserved symbol. Network equipment (e.g., a gNodeB) then overwrites at least some of the PDSCH data in that symbol with the PDCCH information, resulting in punctured PDSCH data in that symbol. For transmitting, the coding rate/repetition pattern is adjusted by the network equipment such that a receiving user equipment is able to recover the overwritten PDSCH data during decoding. As a result, an entire symbol otherwise used only for PDCCH information is able to be used for PDSCH data.
Embodiments of the present disclosure provide a system and method to cache Desktop Bus (D-Bus) objects. According to one embodiment, an Information Handling System (IHS) includes a Remote Access Controller (RAC) with computer-executable instructions that cause the RAC to receive a request for a data object from a requester, and determine whether the data object is to be obtained from a cache or directly from the Desktop Bus (D-Bus) service. The instructions further cause the RAC to obtain the requested data object from either the D-Bus service or the cache based upon the determination, and send the obtained data object to the requester in response to the request. The data object is generated by a D-Bus service that communicates through the D-Bus.
Techniques for achieving improved checkpointing are disclosed. A library that has been implemented in a process is accessed. This library facilitates checkpoint-and-resume functionality to enable the process to checkpoint itself. Tags are used to annotate code of the process. The tags define which data of the process is to be saved in an event in which the process terminates in an unexpected manner. The tags define a block within the code and define the data that is to be saved. After the data has been saved, the process terminates unexpectedly. The process restarts from its beginning state and is progressed through its code. This progression includes skipping code for which data was previously saved. The process continues to progress and to skip through the code until reaching the defined block. At that point, the process resumes at the defined block, such that the process resumes at a user-defined location.
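The tag-based resume behavior can be illustrated with a highly simplified sketch; the `(tag, fn)` representation of annotated blocks and the saved-results map are hypothetical stand-ins for the library's actual annotations:

```python
def run_with_checkpoints(blocks, saved):
    """Re-run a process, skipping blocks whose data was already saved.

    blocks is an ordered list of (tag, fn) pairs representing the tagged
    code blocks; saved maps a tag to the result checkpointed before the
    unexpected termination. Returns the full tag -> result map, executing
    only the blocks with no saved data.
    """
    results = dict(saved)
    for tag, fn in blocks:
        if tag in results:
            continue  # data for this block was saved: skip its code
        results[tag] = fn()  # execution resumes at this defined block
    return results
```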
One example method includes receiving, at a control plane of a zero trust (ZT) architecture, a request to implement a proposed policy, forwarding the request to multiple policy engines of a blockchain policy engine, executing, by the policy engines, a consensus algorithm that decides whether or not the proposed policy will be implemented, wherein, as part of execution of the consensus algorithm, each of the policy engines performs a respective validation process with respect to the proposed policy, and when a consensus is reached by the policy engines, either implementing the proposed policy, or preventing implementation of the proposed policy, as dictated by the consensus.
An automatic namespace deletion system includes an automatic namespace deletion subsystem that is coupled to a storage system. The automatic namespace deletion subsystem receives a namespace creation instruction and, in response, creates a namespace in the storage system and sets a namespace deletion flag for the namespace. The automatic namespace deletion subsystem then stores data in the namespace in the storage system. Subsequent to storing the data in the namespace of the storage system, the automatic namespace deletion subsystem performs an initialization process and, during the initialization process, identifies the namespace deletion flag and, in response, deletes the namespace from the storage system.
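The flag-then-delete-at-initialization flow can be sketched as below; the data structures are assumptions, not the subsystem's actual implementation.

```python
class NamespaceManager:
    """Sketch of flag-driven namespace cleanup: namespaces created with a
    deletion flag set are removed during the next initialization pass."""

    def __init__(self):
        self.namespaces = {}      # namespace name -> stored data
        self.delete_flags = set() # names flagged for automatic deletion

    def create(self, name, auto_delete=True):
        self.namespaces[name] = []
        if auto_delete:
            self.delete_flags.add(name)

    def store(self, name, data):
        self.namespaces[name].append(data)

    def initialize(self):
        # During initialization, identify set flags and delete those namespaces.
        for name in list(self.delete_flags):
            self.namespaces.pop(name, None)
            self.delete_flags.discard(name)
```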
A DC-SCM composable BIOS system includes a DC-MHS computing device having a DC-SCM device with a BIOS subsystem and a BIOS configuration database. During initialization operations for the DC-SCM device that are part of an initialization of the DC-MHS computing device, the DC-SCM device determines that the DC-SCM device requires a BIOS image to be composed in order to operate with the DC-MHS computing device, and identifies a DC-MHS computing device processing system configuration and a DC-MHS computing device hardware configuration of the DC-MHS computing device. The DC-SCM device then retrieves BIOS modules from the BIOS configuration database based on the DC-MHS computing device processing system configuration and the DC-MHS computing device hardware configuration of the DC-MHS computing device, and uses them to compose the BIOS image. The DC-SCM device then provides the BIOS image in the BIOS subsystem, and causes a second initialization of the DC-MHS computing device.
A DataCenter Secure Control Module (DC-SCM) as-a-service system includes a DataCenter Modular Hardware System (DC-MHS) computing device having a Host Processor Module (HPM) and a networking device that is coupled to the HPM and a network, and a DC-SCM provisioning system that is coupled to the networking device via the network. The DC-SCM provisioning system provides a networking device service instance for the networking device, receives a networking device DC-SCM functionality configuration for the networking device through the network via the networking device service instance, and configures at least one image to provide networking device DC-SCM functionality defined by the networking device DC-SCM functionality configuration. The networking device receives the at least one image via the network, and executes the at least one image to perform the networking device DC-SCM functionality with the HPM.
Optimizing lossy compression for classification models with unlabeled data is disclosed. In determining a compression quality, a global KL divergence threshold for input data is determined. If a divergence between the KL divergence of data and perturbed data is less than the global KL divergence threshold, a classifier will perform within a percentage of its original accuracy. A relationship between the compression quality {circumflex over (q)} and the KL divergences of the compressed and decompressed data, after being processed by the classifier, is determined. An optimal compression quality is determined based on the global KL divergence threshold and the relationship.
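The threshold test at the heart of this can be sketched with a discrete KL divergence over classifier output distributions. The function names and the acceptance rule below are illustrative assumptions, not the patent's formulation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as equal-length sequences.
    A small epsilon guards against zero probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def acceptable_quality(p_original, p_compressed, global_threshold):
    """Treat a compression quality as acceptable when the divergence between
    classifier outputs on original vs. compressed inputs stays below the
    global KL divergence threshold."""
    return kl_divergence(p_original, p_compressed) < global_threshold
```

In this framing, the optimal quality would be the most aggressive (smallest) compression quality for which `acceptable_quality` still holds.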
Systems and methods for mutual trust establishment among components of an Information Handling System (IHS) are described. In an illustrative, non-limiting embodiment, an IHS may include: at least one processor; and at least one memory coupled to the processor, wherein the at least one memory comprises program instructions stored thereon that, upon execution by the at least one processor, cause the at least one processor to: obtain a plurality of identifiers for a respective plurality of CPLDs or FPGAs, including an identifier for a first CPLD or FPGA; and provide to a second CPLD or FPGA an expected handshake token for the first CPLD or FPGA, wherein the expected handshake token is based, at least in part, on the identifier for the first CPLD or FPGA, and wherein the second CPLD or FPGA uses the expected handshake token to establish trust for communication with the first CPLD or FPGA.
A technique for transmitting data involves creating a shared hash set for sharing pattern data between a protocol layer and an input/output (IO) path layer, wherein the shared hash set stores pattern keys and pattern content for use in pattern detection. The technique further involves receiving, by the protocol layer, a pattern data block from a client, wherein the pattern data block comprises the pattern data and non-pattern data. The technique further involves transmitting, by the protocol layer, the pattern data block to the IO path layer. The technique further involves writing the pattern data block to a common block file system (CBFS) layer.
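A minimal sketch of the shared hash set, keyed by content hash, might look like the following; how the real protocol and IO path layers key and share patterns is not specified here, so this structure is an assumption.

```python
import hashlib

class SharedPatternSet:
    """Sketch of a hash set shared between a protocol layer and an IO path
    layer: pattern keys map to known pattern content for pattern detection."""

    def __init__(self):
        self._patterns = {}  # pattern key -> pattern content

    @staticmethod
    def key_for(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def register(self, content: bytes):
        self._patterns[self.key_for(content)] = content

    def detect(self, block: bytes) -> bool:
        # Compare content as well as key, so a hash collision cannot misdetect.
        return self._patterns.get(self.key_for(block)) == block
```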
An extended reality processing unit and one or more extended reality appliances may share a joint access code. The processing unit may transmit the access code and identifiers corresponding to the processing unit and the appliances to a radio access network node. The node may configure the processing unit and the appliances with respective control channel search space resources and respective scrambling codes corresponding thereto. The node may receive traffic directed to one of the appliances. If transmission of control channel information corresponding to the traffic would violate a latency criterion corresponding thereto, the node may puncture resources, configured for use by the processing unit, occurring during a joint search space occasion, to transmit the control channel information to the appliance. The node may determine a control information format that fits in remnant portions of the punctured joint search space occasion to deliver control information to the processing unit.
A radio network node may receive from a core network a beam pattern configuration and a progressive paging configuration comprising one or more paging subpattern indications. The node may calculate beams of the subpatterns based on information in the beam pattern configuration. The node may transmit a paging message according to indicated subpatterns according to an order in the progressive paging configuration. The progressive paging configuration may comprise one or more wait periods corresponding to the one or more subpattern indications. The node may wait during a wait period associated with a subpattern after transmitting the paging message according to the subpattern. If a response is not received, the node may transmit the paging message according to a next-in-order subpattern. If a response to the paging message transmitted during a subpattern is received by the node, paging may be stopped and a connection with the user equipment may be established.
A radio access network node may receive, from a core network, traffic information corresponding to an extended reality traffic flow associated with an extended reality processing unit or an end extended reality appliance. The node may transmit to the processing unit an uplink resource grant configuration indicating a sharable uplink resource usable to transmit uplink traffic to the node. The processing unit may receive uplink traffic from the appliance and may relay the uplink traffic to the node. The processing unit may determine that continued relaying of the uplink traffic may violate a criterion and may schedule sharable uplink resource(s), indicated in the uplink resource grant configuration, for use by the appliance in transmitting the uplink traffic directly to the node. The processing unit may transmit an uplink resource sharing report to the node to facilitate the node avoiding blind decoding the uplink traffic received according to the scheduled resource(s).
A random read miss slot size selection engine is configured to select between multiple memory slot sizes to optimize slot size allocations for random read miss IO operations. Upon receipt of an IO operation that is a random read miss IO operation, the slot size selection engine obtains a metadata page encompassing multiple entries in addition to an entry associated with the random read miss IO operation. The slot size selection engine performs a metadata temporal analysis to analyze temporal information associated with previous slot allocations identified in the metadata page. The slot size selection engine also performs a metadata spatial analysis to spatially analyze previous slot allocations to neighboring tracks identified in the metadata page. In response to a determination that the metadata page contains a threshold number of recent slot allocations, the spatial analysis is used to determine the slot size to allocate to the random read miss.
A system claims ownership of a shared mailbox, and configures the shared mailbox as a redundant array of independent drives (RAID) controller. The system creates a RAID volume from at least one non-volatile memory express drive, and exposes the RAID volume during a boot process to a basic input/output system.
Methods, system, and non-transitory processor-readable storage medium for a test selection system are provided herein. An example method includes selecting, by a test selection system, a regression test case from a plurality of regression test cases in a software testing lifecycle system. The test selection system calculates a fault detection score for the regression test case, based on a fault detection decay rate score associated with the regression test case. The test selection system selects the regression test case for execution on a test system based on the fault detection score, and executes the regression test case on the test system.
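One plausible reading of a decay-rate-weighted fault detection score is an exponentially discounted fault history; the formula and parameters below are assumptions for illustration only.

```python
import math

def fault_detection_score(historical_faults, runs_since_last_fault, decay_rate):
    """Hypothetical scoring: a test's historical fault count, discounted
    exponentially by how long ago it last detected a fault."""
    return historical_faults * math.exp(-decay_rate * runs_since_last_fault)

def select_tests(tests, budget):
    """Pick the top-scoring regression tests within an execution budget.
    Each test is (name, historical_faults, runs_since_last_fault, decay_rate)."""
    ranked = sorted(tests, key=lambda t: fault_detection_score(*t[1:]), reverse=True)
    return [name for name, *_ in ranked[:budget]]
```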
A method, computer program product, and computing system for generating a plurality of trained machine learning models using a cloud computing system by training a plurality of machine learning models to forecast storage performance for one or more storage objects of a storage system, wherein the cloud computing system is separate from any storage system. A trained machine learning model is selected from the plurality of trained machine learning models to deploy to a target storage system. The trained machine learning model is deployed on the target storage system.
A server-based-Network Operating System (NOS) disaggregated switch device system includes a server device having a server chassis, a switch connector that is accessible on the server chassis, and a Central Processing Unit (CPU) system that is housed in the server chassis, that is coupled to the switch connector, and that is configured to provide a Network Operating System (NOS). Server/switch cabling is connected to the switch connector. A switch device includes a switch chassis, a server connector that is accessible on the switch chassis and that is connected to the server/switch cabling, and a Network Processing Unit (NPU) system that is housed in the switch chassis, that is coupled to the server connector, and that includes a memory subsystem that is configured to be provided with a plurality of switch tables via the server/switch cabling and by the NOS provided by the CPU system in the server device.
Techniques for correcting data drift of a language model are disclosed. A model is built, and this model is designed to solve a same task for which the language model has been trained. The model is applied to new input data. This application results in generation of a prediction comprising predicted label data. Context is stored in a context management structure (CMS). The context includes a prompt template, a prediction, and labeled input data used to train the language model. The data drift is determined to have occurred. This determination is performed by determining that the context is within a threshold level of similarity to a previously stored context. In response to determining that the data drift has occurred, an operation is performed to correct the data drift.
A method may include slicing a circuit board at multiple parallel cross-sections of the circuit board, each slice of the circuit board taken in a respective slice plane substantially non-perpendicular and substantially non-parallel to a surface of the circuit board.
A system determines whether an API request has a dependency prior to processing the API request. If the dependency is unmet, then a user may instruct the system to automatically resolve the dependency prior to processing the API request. The user may also resolve the dependency manually. If the dependency is resolved, then the system transmits a response based on the processing of the API request that includes information associated with the dependency.
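The check-then-resolve-then-process flow can be sketched as below; the request shape, status values, and resolver hook are all hypothetical.

```python
def process_request(request, met, auto_resolve=False, resolver=None):
    """Sketch of dependency-aware API request handling. `met` is the set of
    dependencies already satisfied; `resolver` resolves one dependency name."""
    unmet = [d for d in request.get("depends_on", []) if d not in met]
    if unmet and auto_resolve and resolver is not None:
        for dep in unmet:
            resolver(dep)   # attempt automatic resolution
            met.add(dep)
        unmet = []
    if unmet:
        # Dependency unmet and not auto-resolved: report it instead of processing.
        return {"status": "blocked", "unmet": unmet}
    # Response includes information associated with the (now met) dependencies.
    return {"status": "processed",
            "name": request["name"],
            "dependencies": sorted(request.get("depends_on", []))}
```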
The technology described herein is directed towards informing user equipment of which physical downlink control channel (PDCCH) locations in a slot symbol are allocated for PDCCH information. With the user equipment having this allocation information, a base station (e.g., gNodeB) scheduler can schedule unused PDCCH resource element group(s) to a UE for physical downlink shared channel (PDSCH) decoding. Further, because the allocation pattern is known to a UE, the UE need not blindly scan and decode all potential resource element groups in a slot, instead only decoding the PDCCH data in the allocated pattern to find the UE-specific information, and thereby proceed with PDSCH decoding. A defined identifier at a predefined symbol location informs the user equipment when the PDCCH allocation information is present. If not present, the UE blindly decodes all the resource element groups to find the UE-specific information to decode PDSCH data, as is currently done.
An adhesive tape may include a first adhesive strip having an adhesive material applied to the first adhesive strip on a first side of the adhesive tape, a second adhesive strip having the adhesive material applied to the second adhesive strip on the first side of the adhesive tape, the second adhesive strip substantially parallel with the first adhesive strip and separated from the first adhesive strip by a distance, and a non-adhesive region between the first adhesive strip and the second adhesive strip wherein the non-adhesive region is substantially free of adhesive on the first side of the adhesive tape.
A circuit board may include a plurality of electrically-conductive layers and a plurality of electrically-insulative layers laminated together with the plurality of electrically-conductive layers such that each of the plurality of electrically-insulative layers is located between adjacent layers of the plurality of electrically-conductive layers. A first electrically-conductive layer of the plurality of electrically-conductive layers may be patterned, during a process step in the manufacture of the circuit board, to include an electrically-conductive pattern patterned to be electrically coupled to an electrically-conductive network within the circuit board and a nonfunctional pad comprising a portion of conductive material patterned from the first electrically-conductive layer but electrically decoupled from the electrically-conductive pattern. A first electrically-insulative layer of the plurality of electrically-insulative layers may be laminated over the first electrically-conductive layer such that resin of the first electrically-insulative layer fills a void between the electrically-conductive network and the nonfunctional pad.
A method, computer program product, and computing system for generating a plurality of artificial storage devices for a storage system, wherein each artificial storage device includes a defined storage capacity. A total useable storage capacity for the storage system is defined based upon, at least in part, the defined storage capacity for each artificial storage device and a storage capacity associated with a plurality of physical storage devices. One or more input/output (IO) requests are processed on the storage system. An IO request concerning an artificial storage device of the plurality of artificial storage devices is discarded.
Up front authorization of a workflow and a security context for workflow execution are disclosed. All possible authorizations that may be required by a workflow are identified up front. A requestor is allowed to execute the workflow only when the authorizations of the user include the authorizations that may be required by the workflow. A security context is generated and associated with the workflow or an instance thereof. The security context scopes or limits the workflow to at least the type or capacity of work requested, work uniquely identified in the security context, and/or service/workflow/call paths that the request is allowed to be processed through.
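The up-front check reduces to a set-containment test, and the security context to a record scoping execution to the work requested. Field names below are illustrative assumptions.

```python
def authorize_workflow(user_auths, workflow_required_auths):
    """A requestor may execute the workflow only if their authorizations cover
    every authorization the workflow could possibly require, checked up front."""
    return set(workflow_required_auths) <= set(user_auths)

def make_security_context(workflow_id, request):
    """Scope the workflow instance to at least the type/capacity of work
    requested and a unique identifier for the request."""
    return {"workflow": workflow_id,
            "work_type": request["type"],
            "capacity": request.get("capacity"),
            "request_id": request["id"]}
```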
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
67.
VIRTUALIZATION OF SYNCHRONIZATION PLANE FOR RADIO ACCESS NETWORK WITH FRONTHAUL PACKET DELAY VARIATION
Described is synchronization of radio units (RUs) with a reference time and frequency using a reference radio unit (reference RU) without the need for boundary-clock-based synchronization. A synchronized RU transmits a reference signal that is received at the reference RU and used to evaluate its time and frequency. A processing unit coupled to the reference RU evaluates waveform sample data, corresponding to the received signal, with reference waveform data and returns feedback, e.g., frequency error data, time error data, and quality information. The frequency error data and time error data are used to synchronize the RU. Messages between the distributed unit (DU) and RU (e.g., via a fronthaul interface) indicate signal start time, stop time and frequency for transmitting by the RU, and for the RU to correct its time and frequency based on the error data. Messaging between the DU and the processing unit coordinates the reference waveform data for evaluation.
Techniques are disclosed for secure edge computing network management in information processing systems. For example, a processing platform comprises at least one processor coupled to at least one memory and is configured to determine that a given edge node has joined an edge computing network comprising a plurality of edge nodes. The processing platform is further configured to determine that security data associated with at least one of the plurality of edge nodes is suitable for the given edge node. The processing platform is further configured to cause a transfer of the security data from the at least one of the plurality of edge nodes, determined to be suitable for the given edge node, to the given edge node.
In at least one embodiment, processing can include: receiving a request to create identical snapshots of volumes V1 and V2 configured as a stretched volume; and in response to receiving the request, performing first processing including: holding write acknowledgements for the stretched volume; tracking writes to the stretched volume; creating a first snapshot of V1 on a first system and a second snapshot of V2 on a second system; stopping tracking of writes to the stretched volume; resuming write acknowledgements for the stretched volume; determining tracked writes for the stretched volume; determining a set of locations corresponding to the tracked writes; selecting the first snapshot of V1 as a master copy; determining data changes corresponding to the set of locations from the master copy; replicating the data changes from the first system to the second system; and applying the data changes to the second snapshot of V2.
A method for sizing backup infrastructure. The method includes: receiving a sizing request including an asset protection policy covering an asset of the backup infrastructure, the asset protection policy at least specifying the asset; based on receiving the sizing request: creating an asset snapshot of the asset; mounting the asset snapshot to obtain a mounted asset snapshot through which asset snapshot data of the asset snapshot is accessible; partitioning the asset snapshot data into a plurality of asset snapshot data slices; computing, at least based on a cardinality of the plurality of asset snapshot data slices, a number of proxy nodes of the backup infrastructure required to collectively perform a prospective backup operation entailing the asset snapshot data; and providing, in reply to the sizing request, a sizing response at least specifying the number of proxy nodes required to collectively perform the prospective backup operation.
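If each proxy node handles a fixed number of snapshot data slices, the sizing computation from the slice cardinality reduces to a ceiling division. The per-node capacity parameter is an assumption; the patent does not state the actual sizing rule.

```python
import math

def required_proxy_nodes(num_slices, slices_per_node):
    """Hypothetical sizing rule: enough proxy nodes to collectively cover all
    asset snapshot data slices, each node handling `slices_per_node` slices."""
    if slices_per_node <= 0:
        raise ValueError("slices_per_node must be positive")
    return math.ceil(num_slices / slices_per_node)
```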
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
71.
ATTACK PREVENTION FOR TRANSMISSION CONTROL PROTOCOL LAYER
A method comprises receiving one or more data packets corresponding to at least one communications protocol request, and scanning the one or more data packets to validate one or more elements corresponding to the at least one communications protocol request. The at least one communications protocol request is rejected in response to invalidating the one or more elements, and the at least one communications protocol request is forwarded to one or more servers in response to validating the one or more elements.
A method comprises monitoring operation of one or more devices of an edge platform, collecting data corresponding to the operation of the one or more devices, and transmitting the data corresponding to the operation of the one or more devices over at least one communications network, via a first server, to a second server. The steps of the method are executed by a processing device operatively coupled to a memory. The processing device is a component of a network switch located in the edge platform.
An inventory system is provided for inventorying inventory items on a plurality of information handling systems. The inventory system includes the plurality of information handling systems and an inventory manager. Each information handling system includes a plurality of inventory items. The inventory manager manages an inventory object, and collects an inventory of the inventory items based on the inventory object. The inventory object identifies inventory information related to the inventory items that are to be collected in the inventory, and identifies execution information related to the collection of the inventory.
A system can maintain a group of data processing units and a storage array that comprises a group of sub-logical unit numbers of storage. The system can collect, by a central processing unit, first data indicative of input and output events for the storage array. The system can process, by respective data processing units, respective autoregressive integrated moving average models for respective sub-logical unit numbers of the group of sub-logical unit numbers with the first data, to generate respective statuses that indicate respective frequencies of access of the respective sub-logical unit numbers. The system can determine, by the central processing unit, respective classifications for respective sub-logical unit numbers of the group of sub-logical unit numbers of storage based on the respective statuses. The system can compress, by a compression engine, second data stored in at least some of the respective sub-logical unit numbers based on the respective classifications.
Disclosed information handling systems and methods employ machine learning to provide and support dynamic anomaly detection algorithms trained in accordance with telemetry independent data (TID) to improve anomaly detection accuracy and reduce alert fatigue associated with false-positive anomaly determinations. In at least some embodiments, TID may encompass user-provided data, including enterprise profile data indicative of attributes of the enterprise's business, and external factor data, indicating external events or conditions with the potential to impact many or all enterprises located in proximity to the event or condition.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
76.
SYSTEMS AND METHODS FOR DETECTING PRESENCE OR ABSENCE OF A COMPONENT ON A CIRCUIT BOARD
Systems and methods for detecting the presence or absence of a component on a circuit board are described. In an illustrative, non-limiting embodiment, an Information Handling System (IHS) may include: a processor; and a memory coupled to the processor, where the memory includes program instructions stored thereon that, upon execution by the processor, cause the IHS to: obtain current data from a current monitor, wherein the current monitor monitors current drawn by one or more monitored components of the IHS; and determine based, at least in part, on the current data, a presence or an absence of a required component of the one or more monitored components, or a presence of an unexpected component of the one or more monitored components.
Methods and systems for managing operation of endpoint devices are disclosed. The operation of the endpoint devices may be managed by deploying containers to the endpoint devices. The containers may include applications and/or other components. The applications may provide various desired services. The containers may also limit use of host endpoint devices based on activity profiles for the requestors of services provided by the applications and the services provided by the applications. The activity profiles may be based on historical information regarding similar requestors and similar services. At least some of the containers may be nested and may separately apply different sets of limits.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/0826 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network costs
78.
CUSTOMIZATION OF APPLICATION PROGRAMMING INTERFACES
Methods and systems for managing distribution of data in a distributed system are disclosed. The data may be distributed by application programming interfaces that provide access to data included in database or other types of data structures. The application programming interfaces may allow custom resources to be defined and used. The use of custom resources may reduce the overhead for obtaining data by allowing desired data to automatically be provided in response to invocation of functionality of the application programming interfaces.
A system can receive a message from a base station configured to communicate via a group of antenna ports, the message being indicative of conducting broadband cellular communications with the base station according to a group of differing demodulation reference signal densities, wherein respective demodulation reference signal densities of the group of differing demodulation reference signal densities correspond to respective antenna ports of the group of antenna ports. The system can use the group of differing demodulation reference signal densities to further communicate the broadband cellular communications with the base station.
A system can receive a message from a base station configured to communicate via a group of antenna ports, the message being indicative of conducting uplink communications of broadband cellular communications with the base station according to a first group of differing demodulation reference signal densities, wherein respective demodulation reference signal densities of the group of differing demodulation reference signal densities correspond to respective antenna ports of the group of antenna ports, and wherein communicating the broadband cellular communications for uplink communications according to the first group of differing demodulation reference signal densities is performed independently of a second group of demodulation reference signal densities that is configured for downlink communications. The system can use the group of differing demodulation reference signal densities to further communicate the broadband cellular communications with the base station.
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for generating an image. The method includes acquiring a semantic segmentation graph by performing semantic segmentation on a source image. The method further includes acquiring a key word for describing a feature of a to-be-generated target image. The method further includes transforming the semantic segmentation graph by using the key word so as to acquire a transformed semantic segmentation graph. The method further includes generating the target image based on the transformed semantic segmentation graph. According to the method of embodiments of the present disclosure, a semantic segmentation graph of a source image and a key word can be used to generate a target image, so as to make the generated target image have a target feature and have semantic consistency with the source image, thereby generating a high-quality target image.
G06V 10/86 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using syntactic or structural representations of the image or video pattern, e.g. symbolic string recognitionArrangements for image or video recognition or understanding using pattern recognition or machine learning using graph matching
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for constructing training data. The method includes determining multiple clusters by clustering prompts in a training dataset; and determining, based on multiple cohesion levels of the multiple clusters, multiple sampling probabilities corresponding to the multiple clusters, where the cohesion levels indicate intra-cluster distances in the clusters. The method further includes determining, according to the multiple sampling probabilities, a target cluster for sampling. The method further includes constructing target training data by sampling target prompts from the target cluster. According to embodiments of the present disclosure, when fine-tuning a language model, prompts can be screened according to a clustering result of the prompts, so as to make the determined prompts more valuable for annotation, thereby ensuring output results of the language model obtained by training to be comprehensive and diverse.
A display device which includes a display component; a display frame; an embedded camera system, the embedded camera system including a camera component, the camera component being physically coupled to the display frame to embed the camera component within the display device; and an integrated shutter component, the integrated shutter component being physically coupled to the display frame, the integrated shutter component and the display frame providing a shutter lateral guidance system, the shutter lateral guidance system performing an integrated shutter lateral guidance operation.
G03B 17/48 - Details of cameras or camera bodiesAccessories therefor adapted for combination with other photographic or optical apparatus
G03B 9/38 - Single rigid plate with multiple slots or other apertures
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
84.
SUSTAINABLE SYSTEM AND METHOD OF USER REPAIR AND UPGRADE FOR LAPTOP HARDWARE COMPONENTS
A base chassis assembly for an information handling system, for replacement of power and input/output (IO) connectors or a battery, may comprise a base chassis top cover joined to a base chassis bottom cover and including a replaceable IO and power pin/port connector module and a removable palm rest and touchpad assembly enclosing a battery disposed within the base chassis assembly. A connector module circuit board, operably connected to power and IO connectors in the replaceable IO and power pin/port connector module, is slidable to slide the power and IO connectors into sidewall apertures of the base chassis assembly upon installation of the replaceable IO and power pin/port connector module. The removable palm rest and touchpad assembly is removable with removal of at least one fastener to access and service the battery, providing for simple servicing or replacement of parts for the information handling system.
H01R 43/26 - Apparatus or processes specially adapted for manufacturing, assembling, maintaining, or repairing of line connectors or current collectors or for joining electric conductors for engaging or disengaging the two parts of a coupling device
H05K 5/00 - Casings, cabinets or drawers for electric apparatus
85.
MANAGING CACHES USING ACCESS INFORMATION OF CACHE PAGES
Improved techniques are directed to managing a cache in an electronic environment in which a first processing core is configured to utilize a first set of queues to reclaim the pages of the cache and a second processing core is configured to utilize a second set of queues to reclaim the pages of the cache. The techniques include adding, to a queue in the first set of queues, an entry identifying access information of a page of the cache. The techniques further include accessing the page by the second processing core. The techniques further include, while the entry is in the first set of queues, updating the access information by the second processing core to indicate accessing the page by the second processing core.
G06F 12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
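The cache-management technique above hinges on one detail: an entry sitting in one core's reclaim queue can have its access information updated in place by the other core. A minimal sketch of that shared-entry arrangement (class names and the use of a per-entry lock are illustrative assumptions, not from the source):

```python
from collections import deque
from dataclasses import dataclass, field
import threading

@dataclass
class AccessInfo:
    """Shared access record for one cache page; referenced by queue entries."""
    page_id: int
    accessed_by: set = field(default_factory=set)  # cores that touched the page
    lock: threading.Lock = field(default_factory=threading.Lock)

class CoreQueues:
    """One reclaim queue of a per-core queue set (e.g. an LRU-style list)."""
    def __init__(self):
        self.queue = deque()

    def add_entry(self, info, core_id):
        info.accessed_by.add(core_id)
        self.queue.append(info)

def access_page(info, core_id):
    """Another core updates the entry in place while it still sits in the
    first core's queue, so reclaim decisions see cross-core accesses."""
    with info.lock:
        info.accessed_by.add(core_id)
```

Because the queue holds a reference to the shared `AccessInfo` rather than a copy, the first core's replacement policy observes the second core's access without any queue migration.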
A portable information handling system keyboard is tested for wear to determine reuse or recycling by subjecting the information handling system to visual inspection performed by a camera, including an evaluation of key wear based in part on a keyboard language in a key lattice. The key lattice couples to a keyboard frame over a membrane having contact sensors and extends plural keyboard language members towards the membrane that identify, by contact against the membrane, the language of the plural keys in the key lattice.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
87.
INFORMATION HANDLING SYSTEM BATTERY SWELL DETECTION BY COVER DECK STRESS
A portable information handling system includes a battery having a battery swell detection sensor to detect predetermined battery swell. In one embodiment, a conductive gasket on the battery upper surface during battery swell presses against a sensor on a bottom side of a housing cover to indicate battery swell. In another embodiment, a strain gauge of a resistive foil having first and second terminals detects battery swell that introduces strain to the strain gauge, increasing resistance to current passing between the first and second terminals. A processing resource of the information handling system executes instructions to detect the battery swell and to discard battery swell indications associated with inputs at a keyboard or touchpad coupled to the housing cover.
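The detection logic in the strain-gauge embodiment reduces to two checks: a resistance rise past a threshold, and suppression of readings taken while keyboard or touchpad input could be flexing the cover deck. A minimal sketch, assuming a ratio-based threshold and a settle window after the last input (all numeric values are illustrative, not from the source):

```python
def swell_detected(resistance_ohms, baseline_ohms,
                   ms_since_last_input,
                   threshold_ratio=1.05, input_settle_ms=500):
    """Flag battery swell when strain-gauge resistance rises past a
    threshold, discarding readings taken soon after keyboard/touchpad
    input that may have strained the cover deck itself."""
    if ms_since_last_input < input_settle_ms:
        return False  # recent input may explain the strain; discard
    return resistance_ohms >= baseline_ohms * threshold_ratio
```

In practice the processing resource would poll the gauge periodically and correlate timestamps with HID input events; the sketch only captures the decision rule.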
A portable information handling system hinge is tested for wear to determine reuse or recycling by subjecting the information handling system to an acceleration and capturing a visual image of hinge movement in response to the acceleration. Excessive hinge wear is detected when housing movement exceeds a threshold distance or fails to achieve a threshold dampening effect. After detection of hinge wear, when a hinge has sufficient torque for use in a selected one of plural information handling system platforms, the hinge is reused; otherwise the hinge is broken down and recycled.
A keyboard operatively couplable to an information handling system may include a keyboard chassis with a plurality of physical keyswitch actuation devices at a plurality of key locations. The keyboard may also include a keyboard key operatively coupled to the keyboard chassis at a first key location via a key link bar. The key link bar may include a bent terminal end operatively coupled to the keyboard key via a side hourglass hook on the keyboard key. The key link bar includes a noise-reducing coating formed over the key link bar, except at c-clip interface sections, to reduce noise associated with actuation of the keyboard key by a user.
H01H 13/84 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by ergonomic functions, e.g. for miniature keyboards; Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by operational sensory functions, e.g. sound feedback
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
H01H 13/88 - Processes specially adapted for manufacture of rectilinearly movable switches having a plurality of operating members associated with different sets of contacts, e.g. keyboards
90.
DOWNLINK CONTROL INFORMATION TO SCHEDULE DOWNLINK DATA ON PHYSICAL CHANNEL AND ENHANCE CONTROL CHANNEL DECODING
The technology described herein is directed towards informing user equipment of which physical downlink control channel (PDCCH) locations in a slot symbol are allocated for PDCCH information. With the user equipment having this allocation information, a base station (e.g., gNodeB) scheduler can schedule unused PDCCH resource element group(s) to a UE for physical downlink shared channel (PDSCH) decoding. Further, because the allocation pattern is known to a UE, the UE need not blindly scan and decode all potential resource element groups in a slot, instead only decoding the PDCCH data in the allocated pattern to find the UE-specific information, and thereby proceed with PDSCH decoding. A defined identifier at a predefined symbol location informs the user equipment when the PDCCH allocation information is present. If not present, the UE blindly decodes all the resource element groups to find the UE-specific information to decode PDSCH data, as is currently done.
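The UE-side decision the abstract describes is a simple fallback: decode only the signalled resource element groups (REGs) when the allocation pattern is present, otherwise blind-scan every candidate. A toy sketch of that selection step, treating REGs as opaque indices (a deliberate simplification of actual NR PDCCH search-space structure):

```python
def regs_to_decode(slot_regs, allocation_pattern, pattern_present):
    """If the allocation pattern was signalled at the predefined symbol
    location, decode only the allocated REGs; otherwise fall back to a
    blind scan of every candidate REG, as is currently done."""
    if pattern_present:
        return [reg for reg in slot_regs if reg in allocation_pattern]
    return list(slot_regs)
```

The saving is proportional to the fraction of REGs excluded by the pattern, which is also what frees the unused REGs for PDSCH scheduling by the gNodeB.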
Methods, apparatus, and processor-readable storage media for generating context-based and user-related predictions using artificial intelligence techniques are provided herein. An example computer-implemented method includes determining one or more user parameters by processing information related to a user in association with at least one web application; determining context information associated with the user accessing one or more portions of the at least one web application; generating one or more predictions associated with future use of the at least one web application by the user by processing at least a portion of the one or more user parameters and at least a portion of the context information using one or more artificial intelligence techniques; and performing one or more automated actions based at least in part on the one or more predictions associated with future use of the at least one web application by the user.
Methods, apparatus, and processor-readable storage media for implementing synchronization systems for application lifecycle management (ALM) are provided herein. An example computer-implemented method includes identifying ALM tools in conjunction with at least one software application development task; establishing a connection between at least a first and at least a second of the ALM tools using one or more APIs associated with the at least a first ALM tool and the at least a second ALM tool; determining data mapping rules and data transformation rules associated with the at least a first ALM tool and the at least a second ALM tool; and synchronizing data, related to the at least one software application development task, from the at least a first to the at least a second ALM tool, via the connection and in accordance with at least a portion of the data mapping rule(s) and the data transformation rule(s).
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
93.
DETERMINING CONFIGURABLE COMPONENT PARAMETERS USING MACHINE LEARNING TECHNIQUES
Methods, apparatus, and processor-readable storage media for determining configurable component parameters using machine learning techniques are provided herein. An example computer-implemented method includes forecasting demand data for at least one component in connection with one or more temporal periods by processing component-related data using one or more machine learning techniques; determining information pertaining to one or more modifications associated with the at least one component; determining, by processing at least a portion of the demand data and at least a portion of the information pertaining to the one or more modifications using at least one designated algorithm, one or more configurable component parameter values attributed to the at least one component and at least a portion of the one or more modifications; and performing one or more automated actions based at least in part on at least one of the one or more configurable component parameter values.
A portable information handling system includes a battery having a pattern that aids in detection of battery swell, such as by capture of a visual image of the battery and pattern with a camera for comparison against a threshold of the pattern distortion that indicates excessive battery swell. For instance, the pattern is parallel lines of a strain gauge that detects strain introduced by battery swell. Distortion of the parallel lines by expansion of the battery due to swelling is detected by changes in distance between the parallel lines and confirmed with distances measured between the camera and the pattern by an infrared depth camera.
A portable information handling system keyboard is tested for wear to determine reuse or recycling by capturing visual images of the information handling system and comparing color, material and finish wear with color, material and finish thresholds. Excessive keyboard wear is detected when the images show excessive degradation from use that impacts potential reuse of the keyboard. Keyboard key wear is checked in part based upon a vertical position of the keyboard keys relative to a palm rest of the information handling system. Keyboard backlight illumination is checked to find any weak lights and is altered to highlight key wear for detection by the camera.
Techniques for optimizing a remote data copy using multiple storage nodes. The techniques include receiving a copy command for a volume at a first node, a first subset of VOL slices being owned by the first node, and a second subset of VOL slices being owned by a second node. The techniques include obtaining, by the first node, respective diff bitmaps for the first subset of slices and the second subset of slices. The techniques include sending the diff bitmap for the second subset of slices to the second node. The techniques include performing, by the first node, a first copy operation based on the diff bitmap for the first subset of slices, and performing, by the second node, a second copy operation based on the diff bitmap for the second subset of slices, the second copy operation being performed at least partially in parallel with the first copy operation.
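The copy optimization above partitions the work by slice ownership: each node copies only the slices its diff bitmap marks dirty, and the two nodes run at least partially in parallel. A minimal sketch, modeling the volume as an indexable buffer and the two storage nodes as Python threads (a stand-in for real inter-node messaging, which the source does not detail):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_dirty_slices(src, dst, diff_bitmap, owned_slices):
    """Copy only the slices marked dirty in the diff bitmap for the
    subset of slices this node owns."""
    for s in owned_slices:
        if diff_bitmap[s]:
            dst[s] = src[s]

def remote_copy(src, dst, diff_bitmap, node1_slices, node2_slices):
    """First node and second node each perform their copy operation,
    driven by their own portion of the diff bitmap, at least partially
    in parallel."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(copy_dirty_slices, src, dst, diff_bitmap, node1_slices)
        pool.submit(copy_dirty_slices, src, dst, diff_bitmap, node2_slices)
```

Because the ownership sets are disjoint, the two copy operations never touch the same slice and need no locking in this model.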
A technique is directed to sharing a page of memory among a first processing node and a second processing node. The technique includes provisioning the first processing node with a first queue and the second processing node with a second queue. The technique further includes configuring the first processing node to enqueue, within the first queue, local lock requests to assign lock ownership of the page to the first processing node and peer lock requests to assign lock ownership of the page to the second processing node. The technique further includes configuring the second processing node to enqueue, within the second queue, the peer lock requests to provide lock ownership coordination of the page among the first processing node and the second processing node.
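The queue arrangement above is asymmetric: the first node's queue holds both local and peer lock requests, while the second node's queue holds only the peer requests, giving both nodes a consistent view of peer ownership. A simplified single-threaded sketch of that bookkeeping (class and function names are illustrative; real inter-node coordination would involve messaging and fencing the source does not describe):

```python
from collections import deque

class Node:
    """A processing node holding one lock-request queue for a shared page."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

def request_lock(requester, first, second):
    """Enqueue a lock request per the asymmetric scheme: the first node's
    queue sees every request; the second node's queue mirrors only the
    peer (second-node) requests for coordination."""
    if requester is first:
        first.queue.append(("local", first.name))
    else:
        first.queue.append(("peer", second.name))
        second.queue.append(("peer", second.name))

def grant_next(first):
    """Ownership passes to whichever node's request heads the first
    node's queue, preserving arrival order."""
    _kind, owner = first.queue.popleft()
    return owner
```

Arrival order in the first node's queue thus serializes ownership of the page across both nodes, which is the coordination the technique aims for.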
Carvalho Alves, Sarah Hannah Lucius Lacerda De Góes Telles
Pinho, Rômulo Teixeira De Abreu
Abstract
Techniques for performing voice synthesis from diffusion generated spectrograms are disclosed. A set of curated audio samples is accessed. A set of spectrograms is also accessed. These spectrograms are based on the set of curated audio samples. A synthetic spectrogram is generated by feeding, as input, the set of spectrograms into a diffusion model, which generates the synthetic spectrogram. An audio file is then generated. The audio file is representative of a synthetic voice. The audio file is generated by feeding, as input, the synthetic spectrogram and input text to a text-to-speech model, which generates the audio file.
G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/04 - Details of speech synthesis systems, e.g. synthesiser structure or memory management
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
In one or more embodiments, a thin heatpipe may comprise a tube with a wick formed as a plurality of longitudinal ridges, each ridge in contact with a first portion of an inner surface corresponding to a first side of the tube and extending to contact a portion of the inner surface associated with an opposite side of the tube. The ridges may divide the interior of the tube into a plurality of vapor cavity areas.
F28D 15/04 - Heat-exchange apparatus with the intermediate heat-transfer medium in closed tubes passing into or through the conduit walls in which the medium condenses and evaporates, e.g. heat-pipes with tubes having a capillary structure
F28D 21/00 - Heat-exchange apparatus not covered by any of the groups
100.
RELATIONSHIP-BASED DATA STRUCTURE GENERATION FOR ENHANCED VISUALIZATION OF INFORMATION RELATED TO SOFTWARE PRODUCTS OBTAINED FROM MULTIPLE DATA SOURCES
An apparatus comprises at least one processing device configured to obtain data associated with software products from a plurality of data sources, to identify associations between portions of the data and respective ones of the software products, and to determine relationships between different subsets of the data obtained from different ones of the data sources based on the identified associations. The processing device is also configured to generate, for a given software product based on the determined relationships, a software product model data structure comprising portions of first and second subsets of the data obtained from different data sources. The processing device is further configured to generate, in response to a request received from a data consumer, a visualization of information related to the given software product based on the software product model data structure and role-based access rules for a given role of the data consumer.