One example method includes identifying a source of a performance issue in a virtualized environment. Telemetry data is collected relative to the flow of a request/response in the virtualized environment. The collected telemetry data can be compared to normal data. A probability can be generated for each layer of the virtualized environment to identify which layer is the most likely source of the performance issue. The layers can be prioritized based on their probabilities. The most likely layer or virtual machine is recommended for analysis to determine the cause of the performance issue.
Techniques are disclosed for edge node data gathering. One example method includes receiving probability distributions from edge nodes; using the probability distributions to identify a set of distribution cliques of the edge nodes; selecting one or more representative edge nodes from each clique; receiving feature data from the edge nodes, the feature data comprising resource information that includes a resource availability and a utilization status of the edge node at a first time, t−1; training an ML-based model using a portion of the feature data; associating the feature data with the corresponding clique for the edge node at the first time; using the probability distributions, cliques, and feature data to obtain episode data for each clique for the first time; and training an ML-based divergence model using a portion of the episode data to update a divergence threshold value for the clique for a second time, t.
Described is a system (and method) that maintains deduplication efficiency when storing data within a clustered data storage environment that implements a global namespace. To provide such a capability, the system may obtain granular data source identifying information from a client system that provides data to be backed-up by a backup component. The data source identifying information may take the form of a placement tag that is associated with the received data. The backup component may then provide such placement tags when providing the backup data to the clustered storage system. The placement tags may then be used to intelligently distribute backup files to particular storage nodes of the clustered storage system to improve deduplication efficiency.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/174 - Redundancy elimination performed by the file system
4.
METHOD TO EFFICIENTLY TRANSFER SUPPORT AND SYSTEM LOGS FROM AIR-GAPPED VAULT SYSTEMS TO REPLICATION DATA SOURCES BY RE-UTILIZING THE EXISTING REPLICATION STREAMS
One example method includes, at a replication data source, initiating a replication process that includes transmitting a replication stream to a replication destination vault, and data in the replication stream is transmitted by way of a closed airgap between the replication data source and the replication destination vault, switching, by the replication data source, from a transmit mode to a receive mode, receiving, at the replication data source, a first checksum of a file, and the first checksum and file were created at the replication destination vault, receiving, at the replication data source, the file, calculating, at the replication data source, a second checksum of the file, and when the second checksum matches the first checksum, ending the replication process.
G06F 3/06 - Digital input from, or digital output to, record carriers
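The abstract above turns on a simple verification step: the data source switches from transmit to receive mode, accepts a file plus a checksum computed at the vault, recomputes the checksum locally, and only then ends the replication process. Below is a minimal sketch of that step, assuming SHA-256 as the checksum function (the abstract does not name one) and plain byte strings standing in for the replication stream:

```python
import hashlib

def verify_vault_file(received_file_bytes: bytes, first_checksum: str) -> bool:
    """Recompute the checksum at the replication data source and compare it
    to the first checksum created at the replication destination vault."""
    second_checksum = hashlib.sha256(received_file_bytes).hexdigest()
    return second_checksum == first_checksum

# Hypothetical flow: after switching to receive mode, the source receives
# (checksum, file) back over the existing replication stream.
vault_log = b"support/system log contents produced inside the vault"
checksum_from_vault = hashlib.sha256(vault_log).hexdigest()

if verify_vault_file(vault_log, checksum_from_vault):
    print("checksums match - replication process can end")
else:
    print("mismatch - request retransmission")
```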
5.
METHOD TO EFFICIENTLY TRANSFER SUPPORT AND SYSTEM LOGS FROM AIR-GAPPED VAULT SYSTEMS TO REPLICATION DATA SOURCES BY RE-UTILIZING THE EXISTING REPLICATION STREAMS
One example method includes, at a replication data source, initiating a replication process that includes transmitting a replication stream to a replication destination vault, and data in the replication stream is transmitted by way of a closed airgap between the replication data source and the replication destination vault, switching, by the replication data source, from a transmit mode to a receive mode, receiving, at the replication data source, a first checksum of a file, and the first checksum and file were created at the replication destination vault, receiving, at the replication data source, the file, calculating, at the replication data source, a second checksum of the file, and when the second checksum matches the first checksum, ending the replication process.
Methods and apparatus are provided for real-time anomaly detection over sets of time-series data. One method comprises: obtaining a state-space representation of a plurality of states and transitions between said states based on sets of historical time-series data; obtaining an anomaly detection model trained using a supervised learning technique, wherein the anomaly detection model associates sequences of states in the state-space representation with annotated anomalies in the sets of historical time-series data and assigns a probability to said sequences of states; and, for incoming real-time time-series data, determining a likelihood of a current state belonging to a plurality of possible states in the state-space representation; and determining a probability of incurring said annotated anomalies based on a plurality of likely current state sequences that satisfy a predefined likelihood criterion. Anomalous behavior is optionally distinguished from previously unknown behavior based on a predefined likelihood threshold.
Systems and methods are provided for optimizing GPU memory allocation for high-performance applications such as deep learning (DL) computing. For example, a DL task is executed using GPU resources (GPU device and GPU memory) to process a DL model having functional layers that are processed in a predefined sequence. A current functional layer of the DL model is invoked and processed using the GPU device. In response to the invoking, a data compression operation is performed to compress data of a previous functional layer of the DL model, and store the compressed data in the GPU memory. Responsive to the invoking, compressed data of a next functional layer of the DL model is accessed from the GPU memory and a data decompression operation is performed to decompress the compressed data for subsequent processing of the next functional layer of the DL model by the GPU device.
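The layer-by-layer compress/decompress pipeline described above can be illustrated without any GPU at all. The sketch below is an assumption-laden approximation only: zlib stands in for whatever codec a real implementation would use, and plain byte buffers stand in for GPU tensors. It compresses the previous layer's data when a layer is invoked and decompresses the next layer's data ahead of time:

```python
import zlib

class LayerMemoryManager:
    """Toy stand-in for a GPU memory manager: keeps per-layer data,
    compressing layers that are not currently being processed."""

    def __init__(self, layer_data: dict[str, bytes]):
        # Initially everything is stored uncompressed.
        self.store = {name: (data, False) for name, data in layer_data.items()}

    def on_invoke(self, current: str, previous: str | None, nxt: str | None) -> bytes:
        # Compress the previous layer's data to free memory ...
        if previous is not None:
            data, compressed = self.store[previous]
            if not compressed:
                self.store[previous] = (zlib.compress(data), True)
        # ... and decompress the next layer's data so it is ready for use.
        if nxt is not None:
            data, compressed = self.store[nxt]
            if compressed:
                self.store[nxt] = (zlib.decompress(data), False)
        return self.store[current][0]  # data for the layer being processed

mgr = LayerMemoryManager({"conv1": b"A" * 100, "conv2": b"B" * 100, "fc": b"C" * 100})
mgr.on_invoke("conv2", previous="conv1", nxt="fc")
print({name: compressed for name, (_, compressed) in mgr.store.items()})  # conv1 now compressed
```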
One example method includes identifying objects that each include one or more segments to be transferred from a source storage tier to a target storage tier, determining a total amount of data to be transferred, using a tiering controller to create worker nodes operable to transfer the segments to the target storage tier, where a number of worker nodes created is based on the amount of data, transferring, from the source storage tier to the target storage tier, only those segments of the objects not already present in the target storage tier, and the transferring of the segments is performed by the worker nodes, and for each of the objects, placing metadata associated with that object in a bucket.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
9.
ZERO-KNOWLEDGE PROTECTION FOR SIDE CHANNELS IN DATA PROTECTION TO THE CLOUD
Masking a data rate of transmitted data is disclosed. As data is transmitted from a production site to a secondary site, the data rate is masked. Masking the data rate can include transmitting at a fixed rate, a random rate, or an adaptive rate. Each mode of data transmission masks or obscures the actual data rate and thus prevents others from gaining information about the data or the data owner from the data transfer rate.
One example method includes performing a filtering process that identifies one or more candidate hosts for scheduling of a pod, wherein the candidacy of a host is determined based in part upon an association rule, generating an overall host score for each of the candidate hosts, and scheduling the pod to one of the candidate hosts based on the overall host score of that candidate host. A host risk score and/or pod risk score may be used in the generating of the overall host score.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
11.
Determining Projected Technology Information Effect
A system can associate interests and responsibilities that correspond to a user account with a tag, based on search data originated by the user account. The system can determine content to send to the user account based on the tag. The system can determine that an offering is first offered after sending the content to the user account. The system can determine that the user account has purchased the offering. The system can determine that a portion of a commission associated with the user account purchasing the offering is credited to sending the content to the user account based on the tag. The system can store an indication that the portion of the commission associated with the user account purchasing the offering is credited to sending the content to the user account based on the tag.
D2(N−n), wherein N represents a total number of cores; calculate the number of cores for the first domain using a quadratic equation generated from the parallel fraction and performance value in each domain; and execute the application in each domain using the number of cores for each domain.
One example method includes receiving, at an IO journal, a new entry that identifies a respective disk location L, and data X written at that disk location L, and determining whether a location specified in an oldest entry of the IO journal is specified in any other entries in the IO journal. When the location specified in the oldest entry is not specified in any other entries in the IO journal, adding the new entry to the IO journal, and augmenting the new entry with undo data. Or, when the location specified in the oldest entry is specified in at least one other entry in the journal, setting data specified in the oldest entry as undo data for the next entry that identifies that location, and adding the new entry to the IO journal, and deleting the oldest entry from the IO journal.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
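The journal behaviour in the abstract above can be pictured as a small in-memory model. This is a loose illustration under assumed semantics (entries keyed by disk location, "undo data" meaning the previous contents of that location), not the disclosed implementation:

```python
from collections import deque

class IOJournal:
    """Toy model of the IO journal behaviour described above (assumed semantics)."""

    def __init__(self):
        self.entries = deque()   # each entry: {"loc", "data", "undo"}
        self.disk = {}           # location -> data currently on disk

    def receive(self, location: str, data: str):
        new_entry = {"loc": location, "data": data, "undo": self.disk.get(location)}
        if self.entries:
            oldest = self.entries[0]
            later_same_loc = [e for e in list(self.entries)[1:] if e["loc"] == oldest["loc"]]
            if later_same_loc:
                # Oldest location appears again later: hand its data down as
                # undo data for the next entry at that location, then drop it.
                later_same_loc[0]["undo"] = oldest["data"]
                self.entries.popleft()
            # Otherwise the oldest location is unique and the new entry simply
            # carries its own undo data (the prior contents of the location).
        self.entries.append(new_entry)
        self.disk[location] = data

journal = IOJournal()
for loc, data in [("L5", "X1"), ("L5", "X2"), ("L7", "Y1")]:
    journal.receive(loc, data)
print(list(journal.entries))
```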
14.
Systems and methods for temporary access with adaptive trust levels for authentication and authorization
One example method includes providing temporary access to a computing system and providing temporary access as a service. The features of a temporary access can be defined by an entity, and a user may be able to obtain a token that includes these features, which may be embedded in the token as claims. The user's access is then controlled in accordance with the embedded claims. The temporary access as a service can be federated. The token may include trust levels and tolerance limits. Further, aspects of the temporary access can be monitored and/or changed. Adjustments to trust levels can be automated or manually performed. Further, trust for specific users can be gained or lost over time based at least on previous accesses.
One example method includes receiving data from a container data collector (CDC), and the data concerns a container, analyzing the data and, based on the analyzing, identifying a security tool needed to scan the container, drawing the security tool from a knowledge lake, executing the security tool to perform a vulnerability scan of the container, based on the executing of the security tool, generating and analyzing a report concerning the vulnerability scan, and transmitting the report, and results of the analyzing, to an alert and action stage.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
One example method includes receiving, by a backup appliance, a request concerning a dataset, performing, by the backup appliance, an inquiry to determine if end-to-end encryption is enabled for a volume of a target storage array, receiving, by the backup appliance, confirmation from the storage array that end-to-end encryption is enabled for the volume, and based on the confirmation that end-to-end encryption is enabled for the volume, storing the dataset in the volume without performing encryption, compression, or deduplication, of the dataset prior to storage of the dataset in the volume.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A method, computer program product, and computing system for sensing a failure within a system within a computing device. The system may include a cache memory system and a vaulted memory comprising a random access memory (RAM) having a plurality of independent persistent areas. A primary node and secondary node may be provided. The primary node may occupy a first independent persistent area of the RAM of the vaulted memory. The secondary node may occupy a second independent persistent area of the RAM of the vaulted memory. Data within the vaulted memory may be written to a persistent media using an iterator. The data may include at least one dirty page. Writing data within the vaulted memory to the persistent media may include flushing the at least one dirty page to the persistent media.
One example method includes performing delta operations to protect data. A delta queue is provided that allows a replica volume to be rolled forwards and backwards in time. When rolling the replica volume forward, an undo delta is created such that the replica volume can be moved backwards after being moved forward. When rolling the replica volume backwards, a forward delta is created such that the replica volume can be moved forwards after being moved backwards.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
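The roll-forward/roll-backward behaviour described in the abstract above can be sketched with a pair of delta stacks. This is a simplified, assumed model in which a "delta" is just a dict of block writes; it is not the disclosed mechanism:

```python
class ReplicaVolume:
    """Toy replica volume: applying a forward delta records an undo delta,
    and applying an undo delta records a forward delta, so every move is reversible."""

    def __init__(self):
        self.blocks = {}          # block address -> data
        self.undo_stack = []      # deltas that move the replica backwards
        self.redo_stack = []      # deltas that move the replica forwards again

    def _apply(self, delta: dict) -> dict:
        # Capture the inverse delta before overwriting the blocks.
        inverse = {addr: self.blocks.get(addr) for addr in delta}
        self.blocks.update(delta)
        return inverse

    def roll_forward(self, delta: dict):
        self.undo_stack.append(self._apply(delta))

    def roll_backward(self):
        if self.undo_stack:
            self.redo_stack.append(self._apply(self.undo_stack.pop()))

vol = ReplicaVolume()
vol.roll_forward({0: "t1-data"})
vol.roll_forward({0: "t2-data", 1: "new"})
vol.roll_backward()               # back to the t1 state
print(vol.blocks)                 # {0: 't1-data', 1: None}
```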
19.
HANDLING CONFIGURATION DRIFT IN BACKUP STORAGE SYSTEMS
Embodiments for handling configuration drift in a data storage system having a plurality of storage nodes. A configuration drift manager system defines a golden configuration dataset for the data storage system and obtains a current configuration dataset of each storage node of the plurality of storage nodes, each of the golden and current configuration datasets comprising a plurality of sentences defining a node configuration parameter; determines a distance between each sentence of the golden configuration dataset and each sentence of the current configuration datasets for each of the plurality of storage nodes; ranks each node based on the distance of its sentences from the golden configuration dataset; and triggers an action on a corresponding node based on its respective ranking.
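As a rough illustration of the sentence-distance ranking idea above, the sketch below compares each node's configuration "sentences" against a golden configuration using a token-overlap distance and ranks the nodes by total drift. The distance metric, thresholds, and data layout are assumptions made purely for illustration:

```python
def sentence_distance(a: str, b: str) -> float:
    """Token-overlap distance (0.0 = identical token sets, 1.0 = disjoint)."""
    ta, tb = set(a.split()), set(b.split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def node_drift(golden: list[str], node_config: list[str]) -> float:
    # For each golden sentence, take its distance to the closest node sentence.
    return sum(min(sentence_distance(g, s) for s in node_config) for g in golden)

golden = ["nfs export enabled", "replication interval 15", "encryption on"]
nodes = {
    "node-a": ["nfs export enabled", "replication interval 15", "encryption on"],
    "node-b": ["nfs export disabled", "replication interval 60", "encryption on"],
}
ranking = sorted(nodes, key=lambda n: node_drift(golden, nodes[n]), reverse=True)
print(ranking)  # most-drifted node first; it would be the one an action is triggered on
```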
One example method includes performing delta operations to protect data. During a delta operation, a primary map and a secondary map are processed using bit logic. The bit logic determines how to handle data stored at a location on the volume associated with an entry in the primary map and included in the current delta operation when a new write for the same location is received as the corresponding entry in the primary map is processed.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
21.
EFFICIENT CLEANUP/DEFRAGMENTATION MECHANISM FOR EXPIRED RETENTION LOCKED (COMPLIANCE & GOVERNANCE) SEGMENTS IN DEDUPED CLOUD OBJECTS
One example method includes identifying a cloud object as a potential candidate for defragmentation, evaluating the cloud object to determine what portion of segments of the cloud object are expired, when the portion of expired segments meets or exceeds a threshold, segregating the expired segments and unexpired segments of the cloud object, creating a first new cloud object that includes only unexpired segments, creating a second new cloud object that includes only expired segments, and deleting the cloud object from storage.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 16/174 - Redundancy elimination performed by the file system
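The cleanup logic in the abstract above is essentially a partitioning step gated by a threshold. Here is a minimal sketch under assumed data shapes: a cloud object is a list of segments with an `expired` flag, and the 0.5 threshold is invented for illustration:

```python
def defragment(cloud_object: list[dict], threshold: float = 0.5):
    """Split a cloud object into an unexpired-only object and an expired-only
    object when the expired fraction meets or exceeds the threshold."""
    expired = [seg for seg in cloud_object if seg["expired"]]
    if len(expired) / len(cloud_object) < threshold:
        return None  # not a candidate for defragmentation yet
    unexpired = [seg for seg in cloud_object if not seg["expired"]]
    # The original cloud object would then be deleted from storage.
    return {"unexpired_object": unexpired, "expired_object": expired}

obj = [{"id": i, "expired": i % 3 != 0} for i in range(9)]
print(defragment(obj))
```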
Systems and methods for allocating resources are disclosed. Resources such as streams are allocated using a stream credit system. Credits are issued to the clients in a manner that ensures the system is operating in a safe allocation state. The credits can be used not only to allocate resources but also to throttle clients where necessary. Credits can be granted fully, partially, or in a number greater than a request. Zero or negative credits can also be issued to throttle clients.
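The credit scheme above can be pictured as a small broker that grants, partially grants, over-grants, or withholds credits so that outstanding allocations never exceed capacity. This is a hedged sketch with made-up policy numbers, not the patented allocation logic:

```python
class StreamCreditBroker:
    """Toy credit broker: grants stream credits while keeping the total of
    outstanding credits within a safe allocation limit."""

    def __init__(self, total_streams: int):
        self.available = total_streams
        self.granted = {}  # client -> credits currently held

    def request(self, client: str, credits_wanted: int) -> int:
        if self.available <= 0:
            grant = 0                    # zero credits: throttle the client
        elif credits_wanted <= self.available // 2:
            grant = credits_wanted + 1   # plenty of headroom: grant more than asked
        else:
            grant = min(credits_wanted, self.available)  # partial grant
        self.available -= grant
        self.granted[client] = self.granted.get(client, 0) + grant
        return grant

broker = StreamCreditBroker(total_streams=10)
print(broker.request("client-a", 3))   # 4  (over-granted)
print(broker.request("client-b", 6))   # 6  (granted from what remains)
print(broker.request("client-c", 2))   # 0  (throttled)
```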
Data protection operations including replication operations are disclosed. Virtual machines, applications, and/or application data are replicated according to at least one strategy. The replication strategy can improve performance of the recovery operation.
A data protection system includes a splitter configured to reduce latencies when splitting writes in a computing environment. The splitter captures a write and adds metadata to augment the write with virtual related information. The augmented data is provided to a smartNIC while the write is then processed in the IO stack. The smartNIC may have a volume only visible to the splitter. The smartNIC also includes processing power that allows data protection operations to be performed at the smartNIC rather than with the processing resources of the host.
One example method includes, in a data buffer that includes one or more words and whitespaces, calculating a hash value of data in a window that is movable within the data buffer, comparing the hash value to a mask, and when the hash value matches the mask, identifying a position of the window in the data buffer as a chunk anchor position, searching for a whitespace nearest the chunk anchor position, and designating an offset of the whitespace as a segment boundary.
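The chunk-anchoring procedure described above maps naturally to a small function: slide a window over the buffer, hash it, compare the hash against a mask, and when it matches, snap the boundary to the nearest whitespace. The sketch below is an assumption-heavy rendering (a simple multiplicative hash, a 16-byte window, and a 0x3F mask are all invented), not the disclosed algorithm:

```python
def find_segment_boundaries(buf: bytes, window: int = 16, mask: int = 0x3F):
    """Return offsets of whitespace characters chosen as segment boundaries."""
    boundaries = []
    for pos in range(0, len(buf) - window):
        h = 0
        for b in buf[pos:pos + window]:   # per-window hash; a real system would roll this
            h = (h * 31 + b) & 0xFFFFFFFF
        if h & mask == mask:              # hash matches the mask
            anchor = pos + window         # chunk anchor position
            # search outwards for the whitespace nearest the anchor
            for delta in range(len(buf)):
                for cand in (anchor - delta, anchor + delta):
                    if 0 <= cand < len(buf) and buf[cand:cand + 1].isspace():
                        boundaries.append(cand)   # offset of the whitespace
                        break
                else:
                    continue
                break
    return sorted(set(boundaries))

text = b"the quick brown fox jumps over the lazy dog " * 20
print(find_segment_boundaries(text)[:5])
```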
One example method includes receiving at a dedupe system, from a client, a request that comprises a set of fingerprints, where each fingerprint in the set corresponds to a particular data segment, filtering, at the dedupe system, the set of fingerprints into a set of unique fingerprints and a set of non-unique fingerprints, reading, at the dedupe system, from a container where copies of the non-unique fingerprints are stored, an additional set of non-unique fingerprints, sending, from the dedupe system to the client, a single response that comprises both the set of unique fingerprints and the additional set of non-unique fingerprints, and receiving from the client, at the dedupe system, data segments that respectively correspond to the unique fingerprints in the set of unique fingerprints, but no data segments corresponding to the non-unique fingerprints in the set of non-unique fingerprints are received by the dedupe system from the client.
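The round trip above (client sends fingerprints, the dedupe system answers with the unique ones plus a prefetched neighbourhood of non-unique ones from the same container, and the client sends back only the unique segments) can be compressed into a few lines. The container read is faked with a dictionary, and all names are illustrative:

```python
def filter_fingerprints(dedupe_index: dict, containers: dict, request: list[str]) -> dict:
    """Split requested fingerprints into unique/non-unique and prefetch the
    rest of the container(s) holding the non-unique ones."""
    unique = [fp for fp in request if fp not in dedupe_index]
    non_unique = [fp for fp in request if fp in dedupe_index]
    prefetched = set()
    for fp in non_unique:
        container_id = dedupe_index[fp]
        prefetched.update(containers[container_id])      # one container read
    # Single response: fingerprints the client must send data for, plus
    # additional non-unique fingerprints the client can skip going forward.
    return {"unique": unique, "known": sorted(prefetched)}

dedupe_index = {"fp2": "c1", "fp3": "c1"}
containers = {"c1": ["fp2", "fp3", "fp9"]}
print(filter_fingerprints(dedupe_index, containers, ["fp1", "fp2", "fp4"]))
# the client would then upload segments only for fp1 and fp4
```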
Techniques for intelligently routing IO to a storage class memory (SCM) namespace are disclosed. A configuration for a namespace is determined, where the configuration indicates a type of IO that the namespace is structured to handle. Details about the configuration of the namespace are stored in a repository. A forwarding rule is generated based on the namespace's stored configuration. When incoming IO having attributes similar to that type is received, implementation of the forwarding rule causes the incoming IO to be directed to the namespace. Attributes of a particular incoming IO are determined. As a result of the attributes satisfying a similarity threshold relative to the type, the forwarding rule is implemented such that the particular incoming IO is directed to the namespace.
Mapping information identifies ranges of files, a set of front-end microservices, and assignments of the ranges to the front-end microservices. Each front-end microservice is thereby responsible for a range of files. The files are represented by segment trees and the front-end microservices handle operations involving an upper-level of the segment trees. A file system request on a file is directed to a particular front-end microservice that is responsible for handling a particular range of files within which the file falls according to the mapping information. An indication is received from a container orchestration service that a number of front-end microservices has changed. The mapping information is updated based on the change in the number of front-end microservices.
One example method includes performing a data management transaction, such as a data read operation, a data write operation, or a data delete operation, generating transaction metadata relating to the data management transaction, transmitting the transaction metadata to a blockchain network, and receiving, from the blockchain network, confirmation that the transaction metadata has been stored in a distributed ledger associated with the blockchain network.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
30.
AUTOMATICALLY CREATING DATA PROTECTION ROLES USING ANONYMIZED ANALYTICS
Selecting user access policies for a new system, by collecting user, access policy, and resource metadata for a plurality of other users storing data dictated by one or more access restriction policies. The collected metadata is anonymized with respect to personal identifying information, and is stored in an anonymized analytics database. The system receives specific user, access policy and resource metadata for the new system from a specific user, and matches the received specific user metadata to the collected metadata to identify an optimum access policy of the one or more access policies based on the assets and access restriction requirements of the new system. The new system is then configured with the identified optimum access policy as an initial configuration.
A method is used in managing storage space in storage systems. Storage space reserved by a storage object from a set of storage tiers is evaluated. A data storage system includes first and second storage tiers configured such that performance characteristics associated with the first storage tier are different from those of the second storage tier. Based on the evaluation, storage space available and consumed in each storage tier of the set of storage tiers is determined.
Systems and methods for backing up data are provided. Data objects or blocks of data can be encrypted with individualized keys. The keys are generated from the unencrypted data objects or blocks. The encrypted data objects or blocks and fingerprints of the encrypted data objects or blocks can be uploaded to a datacenter. Even though the data objects or blocks are encrypted, deduplication can be performed by the datacenter or before the data object is uploaded to the datacenter. In addition, access can be controlled by encrypting the key used to encrypt the data object with access keys to generate one or more access codes. The key to decrypt the encrypted data object is obtained by decrypting the access code.
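The key idea above, deriving each object's key from its own plaintext so that identical objects encrypt identically and remain deduplicable, is commonly known as convergent encryption. The sketch below uses SHA-256 for key derivation and a trivial XOR keystream purely for illustration; a real system would use an authenticated cipher, and the access-code layer is omitted:

```python
import hashlib

def derive_key(plaintext: bytes) -> bytes:
    """Individualized key generated from the unencrypted data itself."""
    return hashlib.sha256(plaintext).digest()

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR keystream stand-in for a real cipher -- illustration only.
    stream = (key * (len(plaintext) // len(key) + 1))[:len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, stream))

block_a = b"identical data block"
block_b = b"identical data block"
ct_a = toy_encrypt(block_a, derive_key(block_a))
ct_b = toy_encrypt(block_b, derive_key(block_b))

# Identical plaintexts yield identical ciphertexts and fingerprints, so the
# datacenter can deduplicate without ever seeing the plaintext.
print(ct_a == ct_b, hashlib.sha256(ct_a).hexdigest() == hashlib.sha256(ct_b).hexdigest())
```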
A method is used in managing storage operations in storage systems. Based on a set of criteria, an amount of storage resources required to perform a storage operation is determined. The storage operation is directed to fault tolerant storage devices. The amount of storage resources is allocated prior to starting the storage operation. The storage operation is performed using the allocated storage resources.
Described is a system (and method) that provides a mechanism for guarding against cyber-attacks including ransomware, malware, and various other types of malicious attacks. The mechanism includes providing an isolated storage recovery account within a cloud-based storage infrastructure. The isolated storage recovery account secures data even in instances where credentials for a subscriber to a cloud-based service, or for the cloud-based provider itself, are compromised. In order to ensure that data is still protected even when access credentials may be compromised (e.g. by a disgruntled employee), the mechanism requires joint coordination between both the provider and the subscriber. The joint coordination may be mandated by the use of a particular multiple encryption technique for credentials that are required to access the isolated storage recovery account.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
35.
Reporting of space savings due to pattern matching in storage systems
Techniques are provided for reporting space savings due to pattern matching in storage systems. For example, in one embodiment, an exemplary method comprises, when a given allocation unit in a storage system matches one or more predefined patterns, (i) setting a corresponding pattern flag for the given allocation unit, and (ii) incrementing at least one pattern counter; generating at least one snapshot of at least a portion of a file comprising the given allocation unit; and determining a range of data reduction attributed to pattern matching based on said at least one pattern counter, wherein one extreme of said range of data reduction attributed to pattern matching excludes said one or more predefined patterns in said at least one snapshot.
Described are techniques for modeling processing performed in a data storage system. Inputs received may include a plurality of workloads each denoting a workload for one of a plurality of storage groups, a plurality of service level objectives each denoting a target level of performance for one of the plurality of storage groups, a plurality of capacities each denoting a storage capacity of one of a plurality of storage tiers, and a plurality of maximum workloads each denoting a maximum workload capability of one of the plurality of storage tiers. Using the inputs, placement of data of the plurality of storage groups on the plurality of storage tiers may be modeled. Output(s) may be generated based on the modeling where the output(s) may include an amount of each of the plurality of storage tiers allocated by modeling to each of the plurality of storage groups.
An integrated computing system configuration system includes a computing system that executes an engine to receive component specifications for each of one or more components supplied by a plurality of suppliers, and receive user input for selecting a subset of the components to be implemented in a customized integrated computing system by generating a base integrated computing system configuration that comprises the component specifications of the subset of the components. The engine may then apply one or more rules to at least one of the component specifications to verify the subset of components, the rule specifying an architectural standard level to be provided by the at least one component, and display the results of the verification on a display.
A method is used in managing truncation of files of file systems. A request is received to delete a portion of a file of a file system. The file system includes a plurality of files. Metadata of the file is evaluated for determining a number of file system blocks associated with the portion of the file that are available for de-allocation. Storage space associated with the file system blocks is reported as available storage space to a user of the file.
Secure credentials (e.g., Diffie-Hellman (DH) key pairs) may be generated independently of requests to establish communication channels between storage system ports (SSPs) and remote ports, such that secure credentials are pre-generated relative to the requests for which they are utilized to establish secure communication channels. For example, DH key pairs may be pre-generated, and each DH key pair stored in an entry of a DH key table. The number of DH keys to generate and store may be determined based on user input and/or the number of potential communication channels for the storage system. In response to a request to establish a communication channel, an IKE session may be executed, during which a pre-generated DH key pair may be obtained from the DH key table, from which symmetric keys for secure communication between the SSP and the remote port may be derived.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 9/14 - Arrangements for secret or secure communications; Network security protocols using a plurality of keys or algorithms
One example method includes receiving from a node, in an HSAN that includes multiple nodes, an ADD_DATA request to add an entry to a distributed ledger of the HSAN, the request comprising a user ID that identifies the node, a hash of a data segment, and a storage location of the data segment at the node, performing a challenge-and-response process with the node to verify that the node has a copy of the data that was the subject of the entry, making a determination that a replication factor X has not been met, and adding the entry to the distributed ledger upon successful conclusion of the challenge-and-response process.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
A System, Computer Program Product, and computer-executable method for managing a virtual network, the System, Computer Program Product, and computer-executable method comprising receiving a request to create the virtual network, creating a policy based on a catalog of virtual network resources, and implementing the virtual network based on the created policy.
A method, computer program product, and computing system for processing historical input/output (IO) performance data associated with one or more storage objects of a storage system. A smoothing model may be applied on at least a portion of the historical IO performance data to generate forecast IO performance data. The forecast IO performance data may be compared to observed IO performance data to generate one or more performance differentials. A normal IO performance range may be generated based upon, at least in part, the one or more performance differentials. One or more IO performance anomalies may be detected based upon, at least in part, the normal IO performance range.
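To make the smoothing-and-differential pipeline above concrete, here is a rough sketch using simple exponential smoothing as the (assumed) smoothing model, with a normal IO performance range derived from the spread of past forecast differentials; the numbers are invented:

```python
def exponential_smoothing(series: list[float], alpha: float = 0.3) -> list[float]:
    """One-step-ahead forecasts from simple exponential smoothing."""
    forecast, forecasts = series[0], []
    for observed in series:
        forecasts.append(forecast)
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecasts

history = [100, 102, 99, 101, 103, 100, 98, 250]   # last observation is suspicious
forecasts = exponential_smoothing(history)
differentials = [obs - fc for obs, fc in zip(history, forecasts)]

# Normal range from the differentials of the earlier, known-good points.
baseline = differentials[:-1]
mean = sum(baseline) / len(baseline)
spread = (sum((d - mean) ** 2 for d in baseline) / len(baseline)) ** 0.5
low, high = mean - 3 * spread, mean + 3 * spread

is_anomaly = not (low <= differentials[-1] <= high)
print(f"differential={differentials[-1]:.1f}, normal range=({low:.1f}, {high:.1f}), anomaly={is_anomaly}")
```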
A method, computer program product, and computing system for determining that one non-volatile random access memory (NVRAM) drive of a pair of NVRAM drives of a storage system is offline, thus defining an offline NVRAM drive and an online NVRAM drive. A virtual disk may be generated on a plurality of solid-state disks (SSDs) of the storage system. The contents of the online NVRAM drive may be copied to the virtual disk. The virtual disk may be exposed to the storage system as a representation of the offline NVRAM drive.
Embodiments for processing authentication tokens in a system with multiple Representational State Transfer (REST) servers and clients. An intelligence process for multiple processes or multiple REST clients in an OS effectively communicates with multiple REST servers and proactively manages each server's authentication token. A shared library is loaded into a process that uses shared memory to manage the generation and expiry of a token and to communicate with a supported REST server through a single function call. The REST Authentication token will be generated for each REST server and stored in the shared memory which will be reused across multiple processes that use the library. The REST token will be validated for each function call.
A method, computer program product, and computing system for processing, using a storage node, one or more updates to one or more metadata pages of a multi-node storage system. The one or more updates may be stored in one or more data containers in a cache memory system of the storage node, thus defining an active working set of data containers. Flushing ownership for each data container of the active working set may be assigned to one of the storage nodes based upon an assigned flushing ownership for each data container of a frozen working set and a number of updates within the frozen working set processed by each storage node, thus defining an assigned flushing storage node for each data container of the active working set. The one or more updates may be flushed, using the assigned flushing storage node, to a storage array.
A method, computer program product, and computing system for defining one or more user data portions and at least two reserved portions of a solid-state drive (SSD). An operating mode of the SSD may be determined. One or more of the at least two reserved portions of the SSD may be utilized based upon, at least in part, the operating mode of the SSD.
A method, computer program product, and computing system for dividing a total IO flow rate limit between a plurality of storage nodes of a multi-node storage system. A total desired IO flow rate may be determined. Each storage node of the plurality of storage nodes may be queried for a desired IO flow rate, thus defining a plurality of desired IO flow rates. An updated IO flow rate limit may be defined, for each storage node, based upon, at least in part, the total IO flow rate limit and the plurality of desired IO flow rates. One or more IO requests may be processed on the plurality of storage nodes based upon, at least in part, the updated IO flow rate limit defined for each storage node and the total desired IO flow rate.
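One plausible (assumed) reading of the division step above is a proportional split: each node's updated limit is its share of the total limit weighted by its desired rate. A tiny sketch of that reading:

```python
def divide_flow_rate_limit(total_limit: float, desired: dict[str, float]) -> dict[str, float]:
    """Split a total IO flow rate limit among storage nodes in proportion to
    each node's desired IO flow rate (uniformly if nothing is desired)."""
    total_desired = sum(desired.values())
    if total_desired == 0:
        return {node: total_limit / len(desired) for node in desired}
    return {node: total_limit * rate / total_desired for node, rate in desired.items()}

limits = divide_flow_rate_limit(10_000, {"node-1": 6_000, "node-2": 2_000, "node-3": 4_000})
print(limits)  # {'node-1': 5000.0, 'node-2': ~1666.7, 'node-3': ~3333.3}
```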
A method, computer program product, and computing system for processing historical input/output (IO) performance data associated with one or more storage objects of a storage system. A plurality of IO modeling systems may be trained using the historical IO performance data. Modeling performance information may be determined for the plurality of IO modeling systems across the historical IO performance data. A forecast score may be determined for each IO modeling system based on the modeling performance information for the plurality of IO modeling systems. A subset of the plurality of IO modeling systems may be selected based upon the forecast score for each IO modeling system. The at least one IO modeling system may be trained using the historical IO performance data. IO performance data may be forecasted using the at least one trained IO modeling system from the subset of the plurality of IO modeling systems.
A method, computer program product, and computing system for establishing a first file system protocol connection between a first storage system and a second storage system. A security descriptor of one or more electronic files on the first storage system may be queried for security-related information using the first file system protocol connection. Security-related information associated with a first file system protocol and security-related information associated with a second file system protocol may be migrated from the first storage system to the second storage system.
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
50.
Method to efficiently transfer support and system logs from air-gapped vault systems to replication data sources by re-utilizing the existing replication streams
One example method includes, at a replication data source, initiating a replication process that includes transmitting a replication stream to a replication destination vault, and data in the replication stream is transmitted by way of a closed airgap between the replication data source and the replication destination vault, switching, by the replication data source, from a transmit mode to a receive mode, receiving, at the replication data source, a first checksum of a file, and the first checksum and file were created at the replication destination vault, receiving, at the replication data source, the file, calculating, at the replication data source, a second checksum of the file, and when the second checksum matches the first checksum, ending the replication process.
One example method includes optimizing client-side deduplication. When backing up a client, a cadence and a change log resolution are determined. These values are evaluated alone or in combination with respect to various thresholds. Client-side deduplication is enabled or disabled based on whether any one or more of the thresholds are satisfied.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/174 - Redundancy elimination performed by the file system
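The enable/disable decision in the abstract above reduces to comparing two measured values against thresholds. The sketch below is one possible interpretation, with invented threshold values and an invented combination rule:

```python
def should_enable_client_dedup(cadence_hours: float, change_rate: float,
                               max_cadence_hours: float = 24.0,
                               max_change_rate: float = 0.30) -> bool:
    """Enable client-side deduplication only when backups run frequently enough
    and the change-log resolution shows a low enough rate of change."""
    frequent_enough = cadence_hours <= max_cadence_hours
    low_churn = change_rate <= max_change_rate
    return frequent_enough and low_churn

print(should_enable_client_dedup(cadence_hours=12, change_rate=0.10))  # True
print(should_enable_client_dedup(cadence_hours=72, change_rate=0.50))  # False
```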
52.
Processing out of order writes in a log structured file system for improved garbage collection
Improving performance of garbage collection (GC) processes in a deduplicated file system having a layered processing architecture that maintains a log structured file system storing data and metadata in an append-only log, arranged as a monotonically increasing log data structure of a plurality of data blocks, wherein the head of the log increases in chronological order and no allocated data block is overwritten. The storage layer reserves a set of data block IDs within the log specifically for the garbage collection process, and assigns data blocks from the reserved set to GC I/O processes requiring acknowledgment, in a possibly out-of-order manner relative to the order of data blocks in the log. It strictly imposes in-order I/O acknowledgement for other, non-GC processes using the storage layer, where these processes may be deduplication backup processes using a segment store layer at the same protocol level as the GC layer.
Techniques are provided for intrusion detection on a computer system. In an example, a computer host device is configured to access data storage of the computer system via a communications network. It can be determined that the computer host device is behaving anomalously because a first current access by the computer host device to the data storage deviates from a second expected access by the computer host device to the data storage by more than a predefined amount. Then, in response to determining that the computer host device is behaving anomalously, the computer system can mitigate against the computer host device behaving anomalously.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
Methods, systems, and computer readable mediums for generating a curated user interface (UI) marker are disclosed. According to one exemplary embodiment, a method includes receiving information for generating a curated UI marker associated with a converged infrastructure management application, wherein the curated UI marker includes a hyperlink to locally stored information associated with the converged infrastructure management application. The method also includes generating, using the information, the curated UI marker associated with the converged infrastructure management application.
A platform is provided for uniform parsing of configuration files for multiple product types. One method comprises obtaining, by a parser of a given product type, a given request from a message queue based on a metadata message of an incoming configuration file from a remote product of the given product type, wherein the message queue stores metadata messages for a plurality of product types; extracting information from the incoming configuration file based on product-specific business logic obtained from a table store comprising tables for the plurality of product types, wherein the business logic provides a mapping between information extracted from the incoming configuration file and destination database tables; and storing the extracted information in the destination database tables of a product-specific predefined database schema.
Processing of continuously generated data using a rolling transaction procedure is described. For instance, a system can process a data stream comprising a first segment and a second segment. A transaction associated with the data stream can be initiated and in response to the transaction being initiated, a first transaction segment for the first segment and a second transaction segment for the second segment are generated. Further, a scaling event that modifies the second segment into a third segment and a fourth segment can be detected, and a data stream transaction procedure is executed to end the transaction.
Described is a system for detecting corruption in a deduplicated object storage system accessible by one or more microservices while minimizing costly read operations on objects. A similarity group verification path is selected by a controller module based upon detection of an object storage memory size condition. The similarity group verification path includes controller phases to verify whether objects have been corrupted without having to incur costly read operations.
A method, computer program product, and computing system for receiving a plurality of physical layer blocks (PLBs). A subset of PLBs may be selected from the plurality of PLBs for combining into a combined PLB based upon, at least in part, a utilization of each PLB of the plurality of PLBs, an average compression per active virtual, and a number of free PLBs generated when combining into the combined PLB. One or more PLBs of the subset of PLBs may be compressed based upon, at least in part, the average compression per active virtual. The one or more PLBs of the subset of PLBs may be combined into the combined PLB.
A method, computer program product, and computing system for defining a normal IO write mode for writing data to a storage system, the normal IO writing mode including: writing the data to a cache memory system, writing the data to a journal, in response to writing the data to the journal, sending an acknowledgment signal to a host device, and writing the data from the cache memory system to a storage array. A request may be received to enter a testing IO write mode. In response to receiving the request, the data may be written to the cache memory system. The writing of the data to the journal may be bypassed. The acknowledgment signal may be sent to the host device in response to writing the data to the cache memory system. The data may be written from the cache memory system to the storage array.
Embodiments for retention locking a deduplicated file stored in cloud storage by defining object metadata for each object of the file, and comprising a lock count and a retention time based on an expiry date of the lock, with each object having segments, the object metadata further having a respective expiry date and lock count for each segment, where at least some segments are shared among two or more files. Also updating the lock count and retention time for all segments of the file being locked; and if the object is not already locked, locking the object using a retention lock defining a retention time and updating the object metadata with a new lock count and the retention time, otherwise incrementing the lock count and updating the retention time for the expiry date if expiry date of a previous lock is older than a current expiry date.
G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
G06F 16/17 - Details of further file system functions
G06F 16/174 - Redundancy elimination performed by the file system
G06F 16/176 - Support for shared access to files; File sharing support
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
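The per-segment lock-count and expiry bookkeeping in the abstract above can be modelled with a couple of dictionaries. The sketch assumes that lock counts increment on each new lock and that the retention time only ever moves later; both assumptions are taken from the abstract's wording, and the data layout is invented:

```python
from datetime import datetime, timedelta

class RetentionLocks:
    """Toy lock-count / expiry tracker for shared deduplicated segments."""

    def __init__(self):
        self.lock_count = {}   # segment id -> number of files locking it
        self.expiry = {}       # segment id -> latest retention expiry

    def lock_file(self, segments: list[str], expires: datetime):
        for seg in segments:
            if seg not in self.lock_count:
                self.lock_count[seg] = 1          # first lock on this segment
                self.expiry[seg] = expires
            else:
                self.lock_count[seg] += 1         # shared segment: bump the count
                # extend retention only if the previous expiry is older
                if self.expiry[seg] < expires:
                    self.expiry[seg] = expires

locks = RetentionLocks()
locks.lock_file(["s1", "s2"], datetime(2030, 1, 1))
locks.lock_file(["s2", "s3"], datetime(2030, 1, 1) + timedelta(days=90))
print(locks.lock_count, locks.expiry["s2"])
```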
61.
Method, computer program product, and computing system for defining a normal IO write mode and handling requests to enter a testing IO write mode
A method, computer program product, and computing system for defining a normal IO write mode for writing data to a storage system including: writing the data to a cache memory system of a first storage node, writing the data to a journal of the first storage node, sending a notification concerning the data to a second storage node, writing one or more metadata entries concerning the data to a journal of the second storage node, sending an acknowledgment signal to the host device, and writing the data to the storage array. A request may be received to enter a testing IO write mode. In response to receiving the request, the data may be written to the cache memory system. The writing of the data to the journal may be bypassed. The acknowledgment signal may be sent to the host device. The data may be written to the storage array.
A request is received from a user at a client to access a file of a set of files backed up to a backup server. Upon verifying a password provided by the user, the client is issued another request for authentication. A first data structure is received responsive to the request. The first data structure is generated using identifiers corresponding to a set of files at the client of which at least some presumably have been backed up to the server. A second data structure is generated. The second data structure is generated using identifiers corresponding to the set of files backed up to the server. The first and second data structures are compared to assess a degree of similarity between the files at the client and the files backed up to the backup server. The user is denied access when the degree of similarity is below a threshold.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/13 - File access structures, e.g. distributed indices
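The similarity test in the abstract above can be approximated with plain identifier sets and a Jaccard score. The real data structures are not specified in the abstract, so this is only a stand-in with an invented threshold:

```python
def similarity(client_file_ids: set[str], server_file_ids: set[str]) -> float:
    """Jaccard similarity between files present at the client and files
    backed up to the backup server."""
    if not client_file_ids and not server_file_ids:
        return 1.0
    return len(client_file_ids & server_file_ids) / len(client_file_ids | server_file_ids)

def grant_access(client_ids: set[str], server_ids: set[str], threshold: float = 0.6) -> bool:
    # Deny access when the client's files look too different from the backups.
    return similarity(client_ids, server_ids) >= threshold

server = {f"file-{i}" for i in range(100)}
genuine_client = {f"file-{i}" for i in range(90)}      # mostly the same files
impostor_client = {f"other-{i}" for i in range(100)}   # knows the password only
print(grant_access(genuine_client, server))   # True
print(grant_access(impostor_client, server))  # False
```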
The described technology is generally directed towards managing data retention policy for stream data stored in a streaming storage system. When a request to truncate a data stream from a certain position (e.g., from a request-specified stream cut) is received, an evaluation is made to determine whether the requested position is within a data retention period as specified by data retention policy. If any data prior to the stream cut position (corresponding to a stream cut time) is within the data retention period, the truncation request is blocked. Otherwise truncation from the stream cut point is allowed to proceed/is performed. Also described is handling automated (e.g., sized based) stream truncation requests with respect to data retention.
One example method includes telemetry based state transition and prediction. Telemetry data is used to generate a transition matrix. The transition matrix is used to predict a state transition for a system or an application. A log level is predictively adjusted based on the transition matrix. The telemetry data is thus adaptively collected based on predicted transitions.
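A minimal rendering of the transition-matrix idea above: count observed state transitions from telemetry, normalize them into probabilities, and raise the log level whenever the most likely next state is an unhealthy one. The state names and the log-level policy are invented for illustration:

```python
from collections import Counter, defaultdict

def build_transition_matrix(states: list[str]) -> dict[str, dict[str, float]]:
    counts = defaultdict(Counter)
    for prev, nxt in zip(states, states[1:]):
        counts[prev][nxt] += 1
    return {s: {n: c / sum(nxts.values()) for n, c in nxts.items()}
            for s, nxts in counts.items()}

def choose_log_level(matrix: dict, current_state: str) -> str:
    predicted = max(matrix.get(current_state, {"healthy": 1.0}).items(),
                    key=lambda kv: kv[1])[0]
    # Predictively collect more telemetry when degradation is likely next.
    return "DEBUG" if predicted in {"degraded", "error"} else "INFO"

observed = ["healthy", "healthy", "degraded", "error", "healthy",
            "healthy", "degraded", "error", "healthy"]
matrix = build_transition_matrix(observed)
print(matrix["degraded"])                    # {'error': 1.0}
print(choose_log_level(matrix, "degraded"))  # DEBUG
```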
Techniques for handling data with different lifetime characteristics in stream-aware data storage systems. The data storage systems can include a file system that has a log-based architecture design, and can employ one or more solid state drives (SSDs) that provide log-based data storage, which can include a data log divided into a series of storage segments. The techniques can be employed in the data storage systems to control the placement of data in the respective segments of the data log based at least on the lifetime of the data, significantly reducing the processing overhead associated with performing garbage collection functions within the SSDs.
Embodiments of the present disclosure relate to a method for storage management, an electronic device, and a computer program product. According to an example implementation of the present disclosure, a method for storage management is provided, which comprises receiving an access request for target metadata from a user at a node among a plurality of nodes included in a data protection system, wherein the access request includes an identification of the target metadata; based on the identification, acquiring target access information corresponding to the identification from a set of access information for the user, wherein the target access information records information related to access to the target metadata; and if the target access information is acquired, determining the target metadata based on the target access information.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
67.
Generating customized documentation for applications
Generating customized documentation is disclosed, including: receiving a set of meta information describing an aspect of an application; and generating a document to provide guidance specific to the application based at least in part on at least a subset of the set of meta information.
In general, embodiments relate to a method for generating synthetic full backups, the method comprising: performing a verification that a previous backup of source data stored in a data domain is a failed synthetic full backup; obtaining, based on the verification, a latest snapshot of the source data; obtaining, based on the verification, a prior snapshot of the source data; making a determination, using a copy list, that a first portion of the data items in the copy list exists in the previous backup and a second portion of the data items does not exist in the previous backup; and performing, based on the determination, a copy operation to copy the second portion of the data items to the data domain to obtain a synthetic full backup.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
69.
Garbage collection integrated with physical file verification
The system generates a data structure based on the unique identifiers of objects in storages and sets indicators in positions corresponding to hashes of the unique identifiers of the objects. The system copies active objects from one storage to another if the number of active objects in the storage does not satisfy a threshold, and resets indicators in the positions of the data structure corresponding to hashes of the unique identifiers of the active objects copied to the other storage. The system generates another data structure based on unique identifiers created while generating the first data structure, with positions in the other data structure corresponding to hashes of those unique identifiers. The system sets indicators in positions in the other data structure corresponding to hashes of unique identifiers of data objects in active storages while generating the first data structure. The system resets indicators in positions in the first data structure corresponding to hashes of the unique identifiers that correspond to indicators set in positions of the other data structure.
Tracking changes to a document by defining a document record having a unique document record identifier (ID) and comprising an index and a file name of the document, and defining a backup record for the document in a series of backups, which includes a timestamp for each backup and a bitmask for the document. The bitmask has a single bit position for each document in the container, which is set to a first binary value to indicate that the corresponding document is unchanged and to a second binary value to indicate that the document is changed or deleted. A primary query is received and resolved for the document by analyzing the document record to find the file name. A secondary query using the document record ID is resolved to find all tracked versions of the document, and the results are returned to the user in the form of a version history list.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/2457 - Query processing with adaptation to user needs
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/38 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
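The per-backup bitmask described in the document-tracking abstract above can be emulated with Python integers used as bit fields. The layout (a 0 bit meaning "unchanged", a 1 bit meaning "changed or deleted") and the record shapes are assumptions drawn from the abstract's wording:

```python
documents = {"report.docx": 0, "notes.txt": 1}   # file name -> bit position

# One backup record per backup run: a timestamp plus a bitmask in which a
# 0 bit means "unchanged" and a 1 bit means "changed or deleted".
backups = [
    {"timestamp": "2024-01-01T00:00", "bitmask": 0b11},  # both documents new
    {"timestamp": "2024-01-02T00:00", "bitmask": 0b01},  # only report.docx changed
    {"timestamp": "2024-01-03T00:00", "bitmask": 0b10},  # only notes.txt changed
]

def version_history(file_name: str) -> list[str]:
    """Secondary query: return timestamps of backups in which the document
    changed (i.e. its bit is set in that backup's bitmask)."""
    bit = documents[file_name]               # primary query resolves the file name
    return [b["timestamp"] for b in backups if b["bitmask"] >> bit & 1]

print(version_history("report.docx"))   # first and second backups
print(version_history("notes.txt"))     # first and third backups
```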
A system can determine timeseries telemetry data of resource utilization of respective data centers of a group of data centers maintained by the system. The system can predict respective hardware requests based on future resource utilization based on the timeseries telemetry data, the hardware requests comprising respective hardware requests at respective data centers of the group of data centers. The system can predict respective future times at which the respective hardware requests will occur. The system can determine respective physical location sources of hardware, respective physical location destinations of hardware, and respective amounts of hardware based on the respective hardware requests and the respective future times. The system can store an indication of the respective physical location sources of hardware, respective physical location destinations of hardware, and respective amounts of hardware.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
Destination namespace and file copying: a namespace service receives communication of a namespace update for a file from the file's source, and communicates the namespace update for the file to an access object service identified for the file. The access object service receives communication of a fingerprints stream, corresponding to the file's segments, from the file's source, and identifies sequential fingerprints in the fingerprints stream as a fingerprints group. The access object service identifies a group identifier for the fingerprints group, and communicates the fingerprints group to a deduplication service associated with a group identifier range that includes the group identifier. The deduplication service identifies fingerprints in the fingerprints group which are missing from fingerprint storage, and communicates the identified fingerprints to the access object service, which communicates a request for the file's segments, corresponding to the identified fingerprints, to the file's source. The deduplication service receives communication of the requested segments from the file's source, and stores the requested segments. The access object service stores the namespace update for the file in a distributed namespace data structure.
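A minimal sketch of the fingerprint-grouping and range-based routing flow described above; the group size, the SHA-1 fingerprints, and the group-identifier scheme are assumptions for illustration.

```python
import hashlib

GROUP_SIZE = 4  # assumed number of sequential fingerprints per group

def fingerprint(segment: bytes) -> str:
    return hashlib.sha1(segment).hexdigest()

def group_fingerprints(fingerprints):
    """Split the fingerprint stream into groups of sequential fingerprints and
    derive a group identifier from each group's first fingerprint."""
    groups = []
    for i in range(0, len(fingerprints), GROUP_SIZE):
        group = fingerprints[i:i + GROUP_SIZE]
        group_id = int(group[0][:8], 16)         # assumed group-identifier scheme
        groups.append((group_id, group))
    return groups

class DeduplicationService:
    """Owns a group-identifier range and a fingerprint store."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.known = set()

    def owns(self, group_id):
        return self.low <= group_id < self.high

    def missing(self, group):
        """Fingerprints in the group that are not yet stored."""
        return [fp for fp in group if fp not in self.known]

def route_and_dedupe(segments, services):
    fingerprints = [fingerprint(s) for s in segments]
    segments_to_request = []
    for group_id, group in group_fingerprints(fingerprints):
        service = next(s for s in services if s.owns(group_id))
        for fp in service.missing(group):
            segments_to_request.append(fp)       # only these segments are fetched
            service.known.add(fp)
    return segments_to_request

services = [DeduplicationService(0, 2**31), DeduplicationService(2**31, 2**32)]
print(route_and_dedupe([b"seg-1", b"seg-2", b"seg-3"], services))
```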
Embodiments of the present disclosure include receiving one or more input/output (IO) requests at a storage array from a host device. Furthermore, the IO requests can include at least one data replication and recovery operation. In addition, the host device's connectivity to a recovery storage array can be determined. Data replication and recovery operations can be performed based on the host device's connectivity to the recovery storage array.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 1/12 - Synchronisation of different clock signals
Aspects of the present disclosure relate to enabling storage array-based remote replication from containerized applications operating on one or more node clusters. In embodiments, a host executing one or more operations from a node cluster is provided an interface (e.g., an application programming interface (API)) to a storage array. Additionally, the host can be delivered resources to manage and monitor the storage array to perform one or more data replication services directly from the node cluster and via the interface. Further, data replication services are triggered in response to instructions issued by the host directly from the node cluster and via the interface.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
75.
BAYESIAN ADAPTABLE DATA GATHERING FOR EDGE NODE PERFORMANCE PREDICTION
One example method includes performing, at a central node operable to communicate with edge nodes of an edge computing environment, operations that include signaling the edge nodes to share their respective data distributions with the central node, collecting the data distributions, performing a Bayesian clustering operation with respect to the edge nodes to define clusters that group some of the edge nodes, where one of the edge nodes in each cluster is a representative edge node of that cluster, and sampling data from the representative edge nodes.
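A much-simplified stand-in for the clustering step above: rather than a full Bayesian treatment, edge nodes are greedily grouped by a symmetric KL-style distance between their reported histograms, and only one representative per group is sampled. The distance, threshold, and greedy assignment are assumptions made for illustration.

```python
import math
import random

def sym_kl(p, q, eps=1e-9):
    """Symmetric Kullback-Leibler divergence between two discrete distributions."""
    kl = lambda a, b: sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return 0.5 * (kl(p, q) + kl(q, p))

def cluster_nodes(distributions, threshold=0.05):
    """Greedy clustering: a node joins the first cluster whose representative
    distribution is within the divergence threshold."""
    clusters = []  # list of (representative_node, [member_nodes])
    for node, dist in distributions.items():
        for rep_node, members in clusters:
            if sym_kl(distributions[rep_node], dist) < threshold:
                members.append(node)
                break
        else:
            clusters.append((node, [node]))
    return clusters

def sample_from_representatives(clusters, node_data, k=10):
    """Only the representative edge node of each cluster is asked for samples."""
    return {rep: random.sample(node_data[rep], min(k, len(node_data[rep])))
            for rep, _ in clusters}

dists = {"edge-1": [0.7, 0.3], "edge-2": [0.69, 0.31], "edge-3": [0.2, 0.8]}
print(cluster_nodes(dists))   # edge-1 and edge-2 grouped; edge-3 on its own
```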
One example method includes determining representation bias in a data set. A bias detection engine is trained using a data set that is sufficiently diversified and/or unbiased. Once trained, test data sets can be evaluated by the bias detection engine to determine an amount of representation bias in the test data sets. The representation bias can be visually conveyed to a user, and suggestions on how to reduce the representation bias may be provided and/or implemented to reduce the representation bias in the test data set. Suggestions can be implemented by adding data to, or removing data from, the test data set to reduce the representation bias.
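A hedged sketch of one way representation bias could be quantified and turned into suggestions: compare the observed share of each group in a test data set against a reference share and flag the largest gaps. The total-variation score and the add/remove suggestion rule are assumptions, not the trained engine described above.

```python
from collections import Counter

def representation_bias(samples, reference):
    """samples: list of group labels; reference: dict of group -> expected share."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    score = sum(abs(g) for g in gaps.values()) / 2   # total variation distance
    return score, gaps

def suggestions(gaps):
    """Suggest adding data for under-represented groups and removing data for
    over-represented ones."""
    return {g: ("add samples" if gap < 0 else "remove samples")
            for g, gap in gaps.items() if abs(gap) > 0.01}

score, gaps = representation_bias(["a", "a", "a", "b"], {"a": 0.5, "b": 0.5})
print(score, suggestions(gaps))   # 0.25, both groups flagged
```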
One example method includes scanning, at a cloud storage site, metadata associated with an object stored at the cloud storage site, fetching, from the metadata, an object creation time for the object, and determining whether the object is out of a minimum storage duration. When the object is out of the minimum storage duration, it is copy-forwarded and then marked for deletion, and when the object is not out of the minimum storage duration, the object is deselected from a list of objects to be copied forward.
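A short sketch of the eligibility check in the abstract above; the metadata layout and the 30-day minimum storage duration are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

MIN_STORAGE_DURATION = timedelta(days=30)   # assumed cloud-tier minimum

def partition_objects(candidates, now=None):
    """Split copy-forward candidates into objects past the minimum storage
    duration (copy forward, then mark for deletion) and objects that are not
    (deselect from the copy-forward list)."""
    now = now or datetime.now(timezone.utc)
    copy_forward, deselected = [], []
    for obj in candidates:
        created = datetime.fromisoformat(obj["creation_time"])
        if now - created >= MIN_STORAGE_DURATION:
            copy_forward.append(obj)
        else:
            deselected.append(obj)
    return copy_forward, deselected

objs = [{"name": "obj-a", "creation_time": "2024-01-01T00:00:00+00:00"},
        {"name": "obj-b", "creation_time": "2025-06-01T00:00:00+00:00"}]
print(partition_objects(objs, now=datetime(2025, 6, 10, tzinfo=timezone.utc)))
```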
One example method includes performing delta operations to protect data. During a delta operation, a primary bitmap and a secondary bitmap are processed using bit logic. The delta generated by the delta operation is transmitted to a receiver. The receiver enqueues the delta into a delta queue configured to allow the replica volume at the target site to be moved to any point in time represented by the deltas in the delta queue.
One example method includes performing delta operations to protect data. A delta queue is provided that allows a replica volume to be rolled forwards and backwards in time. When rolling the replica volume forward, an undo delta is created such that the replica volume can be moved backwards after being moved forward. When rolling the replica volume backwards, a forward delta is created such that the replica volume can be moved forwards after being moved backwards.
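A toy model of the delta-queue idea in the two abstracts above: each delta maps block addresses to new contents, and rolling the replica in either direction first records the opposite delta so the move can be undone. The data layout and queue handling are assumptions for illustration only.

```python
from collections import deque

class ReplicaVolume:
    def __init__(self):
        self.blocks = {}          # block address -> contents
        self.undo_stack = []      # deltas that move the replica backwards

    def _apply(self, delta):
        """Apply a delta and return its inverse (the previous block contents)."""
        inverse = {addr: self.blocks.get(addr) for addr in delta}
        self.blocks.update(delta)
        return inverse

    def roll_forward(self, delta_queue: deque):
        """Move forward one point in time; create an undo delta first."""
        delta = delta_queue.popleft()
        self.undo_stack.append(self._apply(delta))

    def roll_backward(self, delta_queue: deque):
        """Move backward one point in time; create a forward delta first."""
        undo = self.undo_stack.pop()
        forward = self._apply(undo)
        delta_queue.appendleft(forward)

queue = deque([{0: b"v1"}, {0: b"v2"}])
replica = ReplicaVolume()
replica.roll_forward(queue)            # replica now at v1
replica.roll_forward(queue)            # replica now at v2
replica.roll_backward(queue)           # back to v1; forward delta re-enqueued
assert replica.blocks[0] == b"v1" and queue[0] == {0: b"v2"}
```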
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
80.
SYSTEM AND METHOD FOR LOCKLESS ABORTING OF INPUT/OUTPUT (IO) COMMANDS
A method, computer program product, and computing system for receiving an input/output (IO) command for processing data within a storage system. An IO command-specific entry may be generated in a register based upon, at least in part, the IO command. A compare-and-swap operation may be performed on the IO command-specific entry to determine an IO command state associated with the IO command. The IO command may be processed based upon, at least in part, the IO command state associated with the IO command.
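A simplified sketch of the lockless-abort idea: each IO command has a register entry holding its state, and the abort path and the processing path race to flip that state with a single compare-and-swap. The state values are assumptions, and the CAS below is emulated with a lock purely for illustration; on the target platform it would be an atomic hardware instruction.

```python
import threading

PENDING, RUNNING, DONE, ABORTED = range(4)

class CommandEntry:
    def __init__(self):
        self._state = PENDING
        self._guard = threading.Lock()   # stands in for an atomic CAS primitive

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._state == expected:
                self._state = new
                return True
            return False

    @property
    def state(self):
        return self._state

def try_abort(entry: CommandEntry) -> bool:
    """Abort succeeds only if the command has not started running yet."""
    return entry.compare_and_swap(PENDING, ABORTED)

def try_start(entry: CommandEntry) -> bool:
    """Processing claims the command only if it was not aborted first."""
    return entry.compare_and_swap(PENDING, RUNNING)
```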
A method, computer program product, and computer system for implementing a backend service for blocking-free processing of physical entity events, including add, remove, update, and query events. Blocking delays for physical entities may be delegated to maintenance tasks, which may run under a single thread with a scheduler and may merge successive pending events.
Systems and methods for generating a unified metadata model, that includes selecting a first source metadata model, copying a first class, from the first source metadata model, to a first modified metadata model using a unified metadata mapping, and after copying the first class, selecting a second source metadata model, copying a second class, from the second source metadata model, to a second modified metadata model using the unified metadata mapping, and creating the unified metadata model using the first modified metadata model and the second modified metadata model.
G06F 16/80 - Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
G06F 16/90 - Details of database functions independent of the retrieved data types
A system can generate a neural network, wherein an output of the neural network indicates whether a first test of a computer code will pass given an input of respective results of whether respective tests, of a group of tests of the computer code, pass, and wherein respective weights of the neural network indicate a correlation from a group of correlations comprising a positive correlation between a respective output of a respective node of the neural network and the output of the neural network, a negative correlation between the respective output and the output, and no correlation between the respective output and the output. The system can apply sets of inputs to the neural network, respective inputs of the sets of inputs identifying whether the respective tests pass or fail. The system can, in response to determining that a first set of inputs of the sets of inputs to the neural network results in a failure output, store an indication that the first test is dependent on a subset of the respective tests indicated as failing by the first set of inputs.
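A deliberately tiny stand-in for the described network: a single layer whose weights are restricted to +1 (positive correlation), -1 (negative correlation), and 0 (no correlation), with pass(1)/fail(0) results of the other tests as inputs. The thresholding rule and the weight restriction are assumptions for illustration, not the claimed model.

```python
from itertools import product

def predict_pass(weights, results):
    """Target test is predicted to pass only if the correlation-weighted score
    reaches the count of positively correlated tests."""
    score = sum(w * r for w, r in zip(weights, results))
    needed = sum(1 for w in weights if w > 0)
    return score >= needed

def dependency_report(weights, test_names):
    """Apply every pass/fail combination and record which failing subsets
    accompany a failure output for the target test."""
    dependencies = []
    for results in product([0, 1], repeat=len(weights)):
        if not predict_pass(weights, results):
            failing = [n for n, r in zip(test_names, results) if r == 0]
            dependencies.append(failing)
    return dependencies

print(dependency_report([1, 0, -1], ["test_a", "test_b", "test_c"]))
```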
A system can determine to restore a datacenter that comprises a group of virtualized workloads. The system can determine respective associations between respective virtualized workloads and respective datastores. The system can determine to restore a first virtualized workload of the group of virtualized workloads first. The system can restore a first portion of infrastructure that corresponds to the first virtualized workload first among a group of infrastructure. The system can, after restoring the first portion of infrastructure, restore a first portion of data that corresponds to the first virtualized workload first among a group of data. The system can, after restoring the first portion of data, restore a first portion of a virtualization layer that corresponds to the first virtualized workload first among a group of virtualization layers. The system can, after restoring the first portion of the virtualization layer, restore the first virtualized workload.
A system can maintain a first data center that comprises a virtualized overlay network and virtualized volume identifiers. The system can determine to perform a restore of data of the first data center to a second data center, the data comprising first instances of virtualized workloads. The system can transfer the data to the second data center. The system can configure the second data center with the virtualized overlay network and the virtualized volume identifiers. The system can operate second instances of the virtualized workloads on the second data center, the second instances of the virtualized workloads invoking a second instance of the virtualized overlay network and a second instance of the virtualized volume identifiers.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A system can maintain a first data center that comprises a virtualized overlay network and virtualized volume identifiers, and store data comprising virtualized workloads. The system can determine a service level agreement associated with providing a second data center as a backup to the first data center. The system can, based on the service level agreement, divide, into a first portion of tasks and a second portion of tasks, deploying the data to a secondary storage of the second data center, deploying the data to a primary storage of the second data center, and configuring the second data center with the virtualized overlay network and the virtualized volume identifiers. The system can perform the first portion of tasks before determining to restore the first data center to the second data center. The system can perform the second portion of tasks in response to determining to restore the first data center.
A system can maintain a first data center in a first physical location that comprises first compute hardware, and a second data center in a second physical location that comprises second compute hardware. The system can establish an overlay network that spans the first data center and the second data center. The system can establish a group of virtualized volume identifiers that spans the first data center and the second data center, and that virtualizes physical storage volumes. The system can determine whether to process a customer virtualized workload on the first data center or on the second data center to produce a selected location, wherein the customer virtualized workload is configured to be processed on the first data center and to be processed on the second data center. The system can process the customer virtualized workload at the selected location.
A method, computer program product, and computing system for copying a storage protection configuration for one or more storage resources from a first storage array to at least a second storage array in a storage cluster. A communication failure between at least a pair of storage arrays may be detected, thus defining a surviving storage array and at least one failed storage array. The communication failure between the surviving storage array and the at least one failed storage array may be resolved. The storage protection configuration may be synchronized from the surviving storage array to the at least one failed storage array. The storage protection configuration for the one or more storage resources of each storage array of the at least a pair of storage arrays may be arbitrated.
A method, computer program product, and computing system for receiving a selection of one or more secure snapshots to remove from a storage system. A snapshot deletion key may be received from the storage system. The selection of the one or more secure snapshots and the snapshot deletion key may be provided to a storage system support service. A snapshot deletion response may be received from the storage system support service. The snapshot deletion response and the selection of the one or more secure snapshots may be authenticated via the storage system. In response to authenticating the snapshot deletion response and the selection of the one or more secure snapshots, the one or more secure snapshots may be unlocked for deletion.
A method, computer program product, and computer system for identifying, by a computing device, a number of extents needed for a create snapshot operation to create a snapshot. The number of extents may be added to an in-memory cache. The number of extents needed for the create snapshot operation may be allocated from the in-memory cache to execute the create snapshot operation. Freed extents may be added to the in-memory cache based upon, at least in part, executing a delete snapshot operation to delete the snapshot.
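A minimal sketch of the extent-cache behavior in the abstract above; the extent representation and the cache policy are illustrative assumptions.

```python
class ExtentCache:
    def __init__(self):
        self._free = []                      # in-memory cache of free extents

    def prefill(self, extents):
        """Add the number of extents needed for the create-snapshot operation."""
        self._free.extend(extents)

    def allocate(self, count):
        """Allocate extents for the create-snapshot operation from the cache."""
        if count > len(self._free):
            raise RuntimeError("cache underfilled for snapshot create")
        allocated, self._free = self._free[:count], self._free[count:]
        return allocated

    def release(self, extents):
        """Extents freed by a delete-snapshot operation return to the cache."""
        self._free.extend(extents)

cache = ExtentCache()
cache.prefill(["ext-1", "ext-2", "ext-3"])       # extents needed for the snapshot
snapshot_extents = cache.allocate(3)             # execute create-snapshot
cache.release(snapshot_extents)                  # execute delete-snapshot later
```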
A method, computer program product, and computer system for receiving, by a computing device, a snapshot create operation of a volume to create a first snapshot. Existing dirty data of the volume for the first snapshot may be flushed from an in-memory cache. New writes to the volume for the first snapshot may be maintained in the in-memory cache as dirty. A snapshot create operation to the volume may be received to create a second snapshot. The new writes to the volume for the first snapshot may be combined as part of the second snapshot.
A method, computer program product, and computing system for allocating a first number of tokens from a plurality of tokens for processing read IO requests from a read IO queue, thus defining a number of allocated read tokens. A second number of tokens may be allocated from the plurality of tokens for processing write IO requests from a write IO queue, thus defining a number of allocated write tokens. It may be determined that the processing of the write IO requests is throttled. In response to determining that the processing of the write IO requests from the write IO queue is throttled, a maximum allowable number of write tokens may be defined. Additional tokens may be allocated for processing the read IO requests from the read IO queue based upon, at least in part, the maximum allowable number of write tokens and the number of allocated write tokens.
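A small sketch of the token reallocation described above: when write processing is throttled, write tokens are capped at the maximum allowable value and the surplus is handed to the read queue. The total budget and the throttling signal are assumptions for illustration.

```python
TOTAL_TOKENS = 100   # assumed size of the shared token pool

def rebalance(read_tokens, write_tokens, write_throttled, max_write_tokens):
    """Cap write tokens at the maximum allowable value when throttled and
    allocate the surplus to processing read IO requests."""
    if write_throttled and write_tokens > max_write_tokens:
        surplus = write_tokens - max_write_tokens
        write_tokens = max_write_tokens
        read_tokens += surplus
    assert read_tokens + write_tokens <= TOTAL_TOKENS
    return read_tokens, write_tokens

print(rebalance(read_tokens=50, write_tokens=50,
                write_throttled=True, max_write_tokens=20))   # (80, 20)
```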
A method, computer program product, and computing system for defining a first flow for one or more processing threads with access to shared data within the storage system. The one or more processing threads may be executed using the first flow. A processing thread reference count may be determined for the one or more processing threads being executed using the first flow. One or more management threads may be executed on the shared data within the storage system based upon, at least in part, the processing thread reference count.
In general, embodiments relate to a method for provisioning a plurality of client application nodes in a distributed system using a management node, the method comprising: creating a file system in a namespace; associating the file system with a scale out volume; mounting the file system on a metadata node in the distributed system, wherein mounting the file system comprises storing a scale out volume record of the scale out volume; storing file system information for the file system in a second file system on the management node, wherein the file system information specifies the file system and the metadata node on which the file system is mounted; wherein storing the file system information triggers distribution of the file system information to at least a portion of the plurality of client application nodes.
A method for storing data, the method comprising receiving, by an offload component in a client application node, a request originating from an application executing in an application container on the client application node, wherein the request is associated with data and wherein the offload component is located in a hardware layer of the client application node, and processing, by the offload component using an advanced data services pipeline, the request by a file system (FS) client and a memory hypervisor module executing in a modified client FS container on the offload component, wherein processing the request results in at least a portion of the data being stored in a location in a storage pool.
A method for storing data, the method comprising receiving, by an offload component in a client application node, an augmented write request originating from an application executing in an application container on the client application node, wherein the augmented write request is associated with data and wherein the offload component is located in a hardware layer of the client application node, and processing, by the offload component, the augmented write request by a file system (FS) client and a memory hypervisor module executing in a modified client FS container on the offload component, wherein processing the request results in at least a portion of the data being written to a location in a storage pool.
In general, embodiments relate to a method for storing data, the method comprising generating, by a memory hypervisor module executing on a client application node, at least one input/output (I/O) request, wherein the at least one I/O request specifies a location in a storage pool and a physical address of the data in a graphics processing unit (GPU) memory in a GPU on the client application node, wherein the location is determined using a data layout, and wherein the physical address is determined using a GPU module and issuing, by the memory hypervisor module, the at least one I/O request to the storage pool, wherein processing the at least one I/O request results in at least a portion of the data being stored at the location.
Embodiments of the present disclosure relate to a method, a system, and a computer program product for streaming. The method includes: acquiring, during transmission of a stream, information indicating resources of a receiver of the stream available for compensating for degradation of a transmission quality of the stream; and determining at least a target transmission quality of the stream based at least on the resources of the receiver and network resources available for transmitting the stream. This solution provides a more flexible adaptive balance mechanism for streaming, and further optimizes utilization of various resources and user experience in streaming.
Methods, devices, and computer program products for authenticating a peripheral device are provided in embodiments of the present disclosure. In one method, a peripheral device sends, to an edge device, a first authentication request for at least the peripheral device to use resources of the edge device, the first authentication request comprising at least a first identifier associated with the peripheral device and location information of the peripheral device. Then, the peripheral device receives an authentication success or failure indication from the edge device. In this way, effective authentication of a peripheral device can be realized with a less complicated authentication process, so that the security of access of the peripheral device to a virtual desktop can be improved while ensuring good user experience.
Embodiments of the present disclosure relate to a computer-implemented method, a device, and a computer program product. The method includes extracting respective themes of a set of documents with release times within a first period; determining respective semantic information of the themes and frequencies of the themes appearing in the set of documents; and determining the number of documents associated with the themes within a second period according to a prediction model and based on the semantic information and frequencies of the themes. The second period is after the first period. Embodiments of the present disclosure can better predict the tendency of the themes to appear in the future based on the semantic information and frequencies of the themes.