Described systems and techniques determine an event graph of a causal chain of events representing a situation within a network, the event graph including event text characterizing at least one event of the causal chain of events. The event graph may then be processed using a large language model that includes at least one topological context adapter that includes a graph adapter and a text adapter, including processing the event graph with the graph adapter and the event text with the text adapter. The at least one topological context adapter may be trained using existing narratives describing past situations, and/or may be trained using worklog data describing past situations and corresponding actions taken to remedy the past situations. Outputs of the graph adapter and the text adapter may be combined to generate a narrative of the situation that explains the causal chain of events and/or instructions to remedy the situation.
A plurality of textual log records characterizing operations occurring within a technology landscape may be received and converted into numerical log record vectors. For a current log record vector and a preceding set of log record vectors of the numerical log record vectors, a similarity series may be computed that includes a similarity measure for each of a set of log record vector pairs, with each log record vector pair including the current log record vector and one of the preceding set of log record vectors. A similarity distribution of the similarity series may be generated, and an anomaly in the operations occurring within the technology landscape may be detected, based on the similarity distribution.
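The pairwise similarity computation can be sketched as follows (a minimal illustration, assuming a toy bag-of-words vectorization and a mean-based summary of the similarity distribution; `vectorize`, `cosine`, and the 0.3 threshold are illustrative choices, not taken from the abstract):

```python
import math
from collections import Counter

def vectorize(record):
    # Toy bag-of-words vectorization; a real system would use a trained encoder.
    return Counter(record.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity_series(current, preceding):
    # One similarity measure per (current, preceding) log record vector pair.
    return [cosine(current, p) for p in preceding]

def is_anomalous(series, threshold=0.3):
    # Summarize the similarity distribution; flag a low central tendency.
    mean = sum(series) / len(series)
    return mean < threshold

logs = ["disk read ok", "disk read ok", "disk write ok", "kernel panic on node 7"]
vecs = [vectorize(r) for r in logs]
series = similarity_series(vecs[-1], vecs[:-1])
print(is_anomalous(series))  # the panic line barely resembles the history
```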
A computer program product is tangibly embodied on a non-transitory computer-readable medium and includes instructions that, when executed by at least one computing device, are configured to cause the at least one computing device to input a situation event graph and a corresponding scenario into a neural network model, where the neural network model includes a plurality of scenarios and historical ticket data, the situation event graph represents a situation, and the corresponding scenario represents a plurality of situations similar to the situation. The neural network model processes the situation event graph and the corresponding scenario to determine a priority of the situation.
A computer program product is tangibly embodied on a non-transitory computer-readable medium and includes instructions that, when executed by at least one computing device, are configured to cause the at least one computing device to input a situation event graph, topology data associated with the situation event graph, and a knowledge graph associated with the situation event graph into a neural network model. The neural network model includes a plurality of scenarios received from a database, where the situation event graph represents a situation and each of the plurality of scenarios represents at least two similar situations. The neural network model processes the situation event graph, the topology data, and the knowledge graph to determine a similarity estimate between the situation event graph and the plurality of scenarios. The situation event graph is identified as a match to one of the plurality of scenarios based on the similarity estimate.
A computer program product is tangibly embodied on a non-transitory computer-readable medium and includes instructions that, when executed by at least one computing device, are configured to cause the at least one computing device to input a situation event graph and a corresponding scenario into a neural network model, where the neural network model includes a plurality of scenarios, the situation event graph represents a situation, and the corresponding scenario represents a plurality of situations similar to the situation. The neural network model processes the situation event graph and the corresponding scenario to determine a causal impact of the situation.
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable software that provides real-time, integrated service management of other computer software, information systems, computer hardware, computer networks, and information databases
8.
OPTIMIZATION OF ATTRIBUTE ACCESS IN PROGRAMMING FUNCTIONS
Systems and techniques for optimizing attribute accesses include receiving a first data structure, the first data structure including a first sequence of statements representing programming functions having an input and an output. The first sequence of statements is parsed to collect attribute accesses defined in the first sequence of statements. The first data structure and the first sequence of statements defining the attribute accesses are transformed to a second data structure including a second sequence of statements representing the programming functions having the input and the output, where the second sequence of statements defines a smaller number of the attribute accesses than the first sequence of statements. The second data structure is output, where the second data structure including the second sequence of statements generates a same output result as the first data structure including the first sequence of statements when executed by at least one computing device.
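The transformation can be illustrated with Python's `ast` module (assuming Python as the target language; the hoisting of `p.data` into a local variable is applied by hand here, whereas the described technique would perform it automatically):

```python
import ast

def count_attribute_accesses(src):
    # Parse the source and count Attribute nodes (e.g. obj.attr).
    return sum(isinstance(n, ast.Attribute) for n in ast.walk(ast.parse(src)))

before = """
def f(p):
    return p.data.value + p.data.value + p.data.scale
"""

# Hand-applied transform: hoist the repeated access into a local,
# preserving the function's input/output behavior.
after = """
def f(p):
    d = p.data
    return d.value + d.value + d.scale
"""

print(count_attribute_accesses(before), count_attribute_accesses(after))
```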
42 - Scientific, technological and industrial services, research and design
Goods & Services
Software as a service (SAAS) services featuring software that provides real-time, integrated service management of other computer software, information systems, computer hardware, computer networks, and information databases
10.
SMART PATCH RISK PREDICTION AND VALIDATION FOR LARGE SCALE DISTRIBUTED INFRASTRUCTURE
Systems and techniques for implementing a change to a plurality of devices in a computing infrastructure include generating a risk prediction model, where the risk prediction model is trained using a combination of supervised learning and unsupervised learning and identifying, using the risk prediction model, a first set of devices from the plurality of devices having a low risk of failure due to implementing the change and a second set of devices from the plurality of devices having a high risk of failure due to implementing the change. A schedule is automatically generated for implementing the change to the first set of devices. The change is implemented on a portion of the first set of devices according to the schedule. The risk prediction model is updated using data obtained from implementing the change on the portion of the first set of devices.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A plurality of log records characterizing operations occurring within a technology landscape may be received. The plurality of log records may be clustered into at least a first cluster of log records and a second cluster of log records, using at least one similarity algorithm. A first dissimilar subset of log records within the first cluster of log records, and a second dissimilar subset of log records within the second cluster of log record may be identified, using the at least one similarity algorithm. At least one machine learning model may be trained to process new log records characterizing the operations occurring within the technology landscape, using the first dissimilar subset and the second dissimilar subset.
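The selection of dissimilar records within a cluster might look like the following greedy farthest-point sketch (the token-level Jaccard measure and the two-record subset size are assumptions for illustration):

```python
def jaccard(a, b):
    # Token-set similarity between two log records.
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 1.0

def dissimilar_subset(cluster, k=2):
    # Greedy farthest-point selection: repeatedly keep the record least
    # similar to those already chosen, so the training data covers the
    # cluster's variety rather than its most typical members.
    chosen = [cluster[0]]
    while len(chosen) < k and len(chosen) < len(cluster):
        best = min(
            (r for r in cluster if r not in chosen),
            key=lambda r: max(jaccard(r, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

print(dissimilar_subset(
    ["disk full on host a", "disk full on host b", "memory leak detected"]))
```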
An incident ticket having a resolution field and a worklog providing a history of actions taken during attempts to resolve an incident may be received. The incident ticket may be processed using a domain-specific machine learning model trained using training data that includes a plurality of resolved incident tickets, to thereby generate at least one resolution statement. Source data used by the domain-specific machine learning model in providing the at least one resolution statement may be determined, the source data including one of the worklog and the training data. A hallucination score may be assigned to the at least one resolution statement, based on the source data, to identify hallucinated content within the at least one resolution statement. The at least one resolution statement may be modified to remove the hallucinated content and thereby obtain a resolution for inclusion in the resolution field.
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/12 - Discovery or management of network topologies
A plurality of resolved incident tickets may each include a worklog providing a history of actions taken during attempts to resolve a corresponding resolved incident and a resolution having at least one resolution statement. An iterative processing of the plurality of resolved incident tickets may include processing each resolution statement of the resolution with at least one domain-specific statement classifier specific to the incident domain to either discard or retain a classified resolution statement; processing each retained classified resolution statement in conjunction with the worklog to determine whether to discard or retain the resolved incident; providing an updated resolution for the resolved incident when the resolved incident is retained, and adding the resolved incident with the updated resolution to the processed incident tickets. Then, at least one machine learning model may be trained to process a new incident ticket, using the processed incident tickets.
H04L 41/5074 - Handling of user complaints or trouble tickets
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
15.
APPLICATION STATE PREDICTION USING COMPONENT STATE
Described systems and techniques enable prediction of a state of an application at a future time, with high levels of accuracy and specificity. Accordingly, operators may be provided with sufficient warning to avert poor user experiences. Unsupervised machine learning techniques may be used to characterize current states of applications and underlying components in a standardized manner. The resulting data effectively provides labelled training data that may then be used by supervised machine learning algorithms to build state prediction models. Resulting state prediction models may then be deployed and used to predict an application state of an application at a specified future time.
Systems and techniques for identifying performance issues and recommending actions during design-time application development include receiving a design-time user interface (UI) having multiple fields associated with data from a database, where the multiple fields include one or more types of fields. In response to receiving a trigger, the systems and techniques iterate through the multiple fields in the design-time UI by applying one or more rules related to the types of fields and the cardinality of the data from the database. One or more recommendations are generated for one or more of the fields based on the rules applied to the multiple fields, and the recommendations are output to a display. The systems and techniques may include changing the design-time UI without user input using the recommendations.
Described techniques determine performance metric values of a performance metric characterizing a performance of a system resource of an information technology (IT) system, and determine driver metric values of a driver metric characterizing an occurrence of an event that is at least partially external to the system resource. A correlation analysis may confirm a potential correlation between the performance metric values and the driver metric values as a correlation. A graph relating the performance metric to the driver metric may be generated. A plurality of extrapolation algorithms may be trained to obtain a plurality of trained extrapolation algorithms using a first subset of data points of the graph, and the plurality of trained extrapolation algorithms may be validated using a second subset of data points of the graph. A driver metric threshold corresponding to the performance metric threshold may be determined using a validated extrapolation algorithm.
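The correlation-and-extrapolation flow might be sketched as follows (assuming a single least-squares line as the extrapolation algorithm and invented sample data; a real system would train and validate a plurality of algorithms):

```python
def pearson(xs, ys):
    # Pearson correlation to confirm the potential correlation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def fit_linear(xs, ys):
    # Least-squares line y = a*x + b, one candidate extrapolation algorithm.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Driver metric (e.g. request rate) vs. performance metric (e.g. CPU %).
driver = [10, 20, 30, 40, 50, 60]
cpu = [12, 22, 31, 42, 52, 61]

# Train on a first subset of data points; the rest could validate the fit.
a, b = fit_linear(driver[:4], cpu[:4])

# Invert the fitted line to find the driver value at a CPU threshold of 90%.
cpu_threshold = 90
driver_threshold = (cpu_threshold - b) / a
print(round(pearson(driver, cpu), 3), round(driver_threshold, 1))
```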
Described systems and techniques perform causal chain extraction for an investigated event in a system, using a neural network trained to represent a temporal sequence of events within the system. Such neural networks, by themselves, may be successful in predicting or characterizing system events, without providing useful interpretations of causation between the system events. Described techniques use the representational nature of neural networks to perform intervention testing using the neural network, distinguish confounding events, and identify a probabilistic root cause of the investigated event.
A system, method, and computer program product for intelligent-skills-matching includes receiving a plurality of tickets, where each ticket in the plurality of tickets includes a plurality of fields and at least one agent who resolved the ticket is identified. A clustering algorithm is used on one or more of the plurality of fields to determine skills from the plurality of tickets. A taxonomy of the skills is generated using a taxonomy-construction algorithm. Using the taxonomy of the skills, a skills matrix or a skills knowledge graph is created with agents assigned to the skills.
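A toy version of the skills-matrix construction (using the first token of the ticket category as a stand-in for the clustering algorithm's label; the ticket data is invented):

```python
from collections import defaultdict

tickets = [
    {"category": "database outage", "agent": "ana"},
    {"category": "database slow query", "agent": "ana"},
    {"category": "network vpn", "agent": "bo"},
]

def build_skills_matrix(tickets):
    # Skill = first token of the category (a crude cluster label); the
    # matrix counts how often each agent resolved tickets of that skill.
    matrix = defaultdict(lambda: defaultdict(int))
    for t in tickets:
        skill = t["category"].split()[0]
        matrix[t["agent"]][skill] += 1
    return {agent: dict(skills) for agent, skills in matrix.items()}

print(build_skills_matrix(tickets))
```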
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/12 - Discovery or management of network topologies
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/069 - Management of faults, events, alarms or notifications using logs of notifications; Post-processing of notifications
H04L 41/12 - Discovery or management of network topologies
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/12 - Discovery or management of network topologies
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, continuously generate a knowledge graph, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
Described systems and techniques determine causal associations between events that occur within an information technology landscape. Individual situations that are likely to represent active occurrences requiring a response may be identified as causal event clusters, without requiring manual tuning to determine cluster boundaries. Consequently, it is possible to identify root causes, analyze effects, predict future events, and prevent undesired outcomes, even in complicated, dispersed, interconnected systems.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/12 - Discovery or management of network topologies
Information technology service management (ITSM) incident reports are converted from textual data to multiple vectors using an encoder and parameters are selected, where the parameters include a base cluster number and a threshold value. A base group of clusters is generated using an unsupervised machine learning clustering algorithm with the vectors and the parameters as input. A cluster quality score is computed for each of the base group of clusters. Each cluster from the base group of clusters with the cluster quality score above the threshold value is recursively split into new clusters until the cluster quality score for each cluster in the new clusters is below the threshold value. A final group of clusters is output, where each cluster from the final group of clusters represents ITSM incident reports related to a same problem.
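The recursive splitting loop can be sketched in one dimension (using variance as a stand-in quality score and a mean split as a stand-in for re-running the clustering algorithm; note that, matching the abstract, a score above the threshold triggers a split):

```python
def variance(xs):
    # Stand-in cluster quality score: high variance = poor cohesion.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def split_once(xs):
    # Split a 1-D cluster at its mean (a stand-in for re-clustering).
    m = sum(xs) / len(xs)
    left = [x for x in xs if x < m]
    right = [x for x in xs if x >= m]
    return [c for c in (left, right) if c]

def recursive_split(cluster, threshold):
    # Keep splitting any cluster whose quality score is above the
    # threshold, until every resulting cluster scores below it.
    if variance(cluster) <= threshold or len(cluster) < 2:
        return [cluster]
    out = []
    for part in split_once(cluster):
        if part == cluster:  # no progress; stop to avoid infinite recursion
            return [cluster]
        out.extend(recursive_split(part, threshold))
    return out

data = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]
print(recursive_split(data, threshold=0.5))
```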
Large numbers of files having widely varying structures and formats may be ingested, and used to generate dynamic slot indexes that enable fast and reliable searching of the files. Unique data patterns within the files are used to generate unique pattern models, which enable model-specific mappings of file fields to slots of a dynamic slot index. Accordingly, the dynamic slot indexes may reuse a single slot for multiple fields. Complex queries may then be processed in a time-efficient and resource-efficient manner, even when rapidly ingesting huge numbers of files having indeterminate data patterns from many different sources.
According to one general aspect, a non-transitory computer readable medium includes instructions that, when executed by at least one processor, cause a computing device to read a string of a log file for an application, where the log file comprises multiple strings of log data, compare the string to signatures stored in a memory to find a matching signature, where each of the signatures is encoded with a signature identifier (ID), determine a deviation between the string and the matching signature, encode the string with the signature identifier (ID) of the matching signature and the deviation, and transfer the string to a destination computing device using the signature identifier (ID) of the matching signature, the deviation, and a timestamp of the string.
G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
G06F 16/17 - Details of further file system functions
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
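A minimal sketch of the signature matching and deviation encoding described above (the `<*>` wildcard template format and the two example signatures are assumptions; a real deployment would also attach the timestamp when transferring the encoded string):

```python
SIGNATURES = {
    1: "user <*> logged in from <*>",
    2: "disk <*> usage at <*> percent",
}

def match(string):
    # Find the signature whose fixed tokens all line up with the string,
    # and return (signature ID, deviation = the wildcard values).
    tokens = string.split()
    for sig_id, sig in SIGNATURES.items():
        sig_tokens = sig.split()
        if len(sig_tokens) != len(tokens):
            continue
        if all(s == "<*>" or s == t for s, t in zip(sig_tokens, tokens)):
            deviation = [t for s, t in zip(sig_tokens, tokens) if s == "<*>"]
            return sig_id, deviation
    return None  # no matching signature; the full string must be sent

print(match("user alice logged in from 10.0.0.7"))
```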
29.
Use of graph databases for replicating topology and enriching monitoring data streams in complex information technology systems
The systems and techniques include storing topology from each of a plurality of source tools as a plurality of source subgraphs in a graph database using a source schema that mirrors a source topology model for each of the plurality of source tools. Each of the plurality of source subgraphs in the graph database is transformed to a plurality of destination subgraphs using a destination schema and transformation rules that mirror a destination topology model for each of a plurality of destination tools. The plurality of destination subgraphs are stored in the graph database. The topology is delivered to each of the plurality of destination tools by traversing the plurality of destination subgraphs in the graph database and invoking application programming interfaces (APIs) for each of the plurality of destination tools in the destination subgraphs in the graph database.
Described systems and techniques enable prediction of a state of an application at a future time, with high levels of accuracy and specificity. Accordingly, operators may be provided with sufficient warning to avert poor user experiences. Unsupervised machine learning techniques may be used to characterize current states of applications and underlying components in a standardized manner. The resulting data effectively provides labelled training data that may then be used by supervised machine learning algorithms to build state prediction models. Resulting state prediction models may then be deployed and used to predict an application state of an application at a specified future time.
A method for determining a misconfiguration of components in an Information Technology (IT) infrastructure includes decomposing one or more components into sub parts, creating one or more synthetic objects, each synthetic object being associated with a sub part of a respective component, and including the components and the synthetic objects in a model of the IT infrastructure. The method further determines a relationship between a first component and a first synthetic object based on attributes of the first component and attributes of the first synthetic object, includes the determined relationship in the model of the IT infrastructure, and loads a graph of the IT infrastructure in a graph database with the first component and the synthetic object as nodes and the determined relationship as an edge in the graph. The method further determines the misconfiguration of components in the IT infrastructure by identifying components having improper relationships in the graph.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 41/0873 - Checking configuration conflicts between network elements
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
H04L 69/163 - In-band adaptation of TCP data exchange; In-band control procedures
H04L 41/12 - Discovery or management of network topologies
32.
Search data curation and enrichment for deployed technology
A content engine may utilize a configuration management database (CMDB) to manage a configuration of a technology landscape. A curation manager may utilize a plurality of article sources to provide, in collaboration with the content engine, a plurality of enriched articles that are specific to the technology landscape. The enriched articles enable an IT administrator using the content engine to execute IT administration duties in a fast, efficient, reliable, and timely manner.
A cloud-native proxy gateway is reachable from a central server and from an isolated cloud VM. A method allows legacy (non-cloud native) solutions to establish a secure connection to the isolated cloud VM, even when incoming port flows are not enabled. The method involves transforming a TCP/IP network connection request into a cloud API call, ignoring IP addresses, and instead using a unique cloud resource identifier as the primary network routing methodology. In response to a communication connection request by the central server, the isolated VM establishes a reverse tunnel to the cloud-native proxy gateway. Communication flow initiated by the central server proceeds through the reverse tunnel to the isolated VM, avoiding an issue of duplicate IP addresses in the cloud.
A non-transitory computer-readable storage medium may comprise instructions, stored thereon, for determining health statuses of multiple virtual machine templates. When executed by at least one processor, the instructions may be configured to cause a health status server to at least run multiple scripts against multiple virtual machines, each of the multiple virtual machines being generated from one of the multiple virtual machine templates, and generate, for each of the multiple virtual machines, an output report indicating success or failure for each of the multiple scripts.
A method for securing a networked computer system executing an application includes identifying a vulnerable computer resource in the networked computer system, determining all computer resources in the networked computer system that are accessible from, or are accessed by, the vulnerable computer resource, and prioritizing implementation of a remediation action to secure the vulnerable computer resource if a vulnerability path extends from the vulnerable computer resource to a critical computer resource that contains sensitive information. The remediation action to secure the vulnerable computer resource is a safe remediation action that does not impact availability of the application executing on the networked computer system.
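The vulnerability-path determination reduces to graph reachability; a sketch follows (the access graph and the critical resource set are invented for illustration):

```python
from collections import deque

# Access graph: resource -> resources reachable from it (assumed topology).
ACCESS = {
    "web": ["app"],
    "app": ["db", "cache"],
    "cache": [],
    "db": [],
}
CRITICAL = {"db"}  # resources containing sensitive information

def has_vulnerability_path(vulnerable, graph=ACCESS, critical=CRITICAL):
    # Breadth-first search from the vulnerable resource; remediation is
    # prioritized only if some path reaches a critical resource.
    seen, queue = {vulnerable}, deque([vulnerable])
    while queue:
        node = queue.popleft()
        if node in critical:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(has_vulnerability_path("web"), has_vulnerability_path("cache"))
```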
A method is provided for troubleshooting abnormal behavior of an application hosted on a networked computer system. The method may be implemented by a root cause analyzer. The method includes tracking a single application performance metric across all the clients of the application and analyzing aggregated application performance based on the single application performance metric. The method involves determining outlier client attributes associated with an abnormal transaction of the application and ranking the outlier client attributes based on comparisons of historical and current abnormal transactions. The method associates one or more of the ranked outlier client attributes with the root cause of the current abnormal transaction. Association rule learning is used to associate one or more of the ranked outlier client attributes with the root cause.
A computer system includes a processor, a memory, a data collector, a relationships analyzer, and a topological map generator. The data collector retrieves performance data in a specific set of performance categories for computing resources in a computing system for a time interval. The relationships analyzer, for each computing resource-to-computing resource pair in the computing system, performs a correlation analysis of the respective behavior values of the computing resources in the pair, and identifies the computing resource-to-computing resource pairs that have correlation values exceeding a pre-determined threshold level as having performance interdependencies. The topological map generator prepares an undirected graph of the computing resources that have performance interdependencies, and displays the undirected graph as a topographic map of the computing resources in the computing system.
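The pairwise correlation analysis might be sketched as follows (the sample metric data and the 0.9 threshold are invented; edges are returned as a set of resource pairs rather than rendered as a topographic map):

```python
from itertools import combinations

def pearson(xs, ys):
    # Correlation of two behavior-value series over the time interval.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Performance samples per computing resource over one interval (made up).
metrics = {
    "web": [10, 20, 30, 40],
    "app": [11, 21, 29, 41],
    "db":  [40, 10, 35, 15],
}

def interdependency_graph(metrics, threshold=0.9):
    # Undirected edge for every resource pair whose correlation value
    # exceeds the pre-determined threshold.
    edges = set()
    for a, b in combinations(sorted(metrics), 2):
        if abs(pearson(metrics[a], metrics[b])) > threshold:
            edges.add((a, b))
    return edges

print(interdependency_graph(metrics))
```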
Systems and techniques for identifying a common change window for one or more services implemented on one or more hosts include querying time series performance data for each host of a service to identify time slots of low resource consumption on the host, annotating the time slots with service tags, where the service tags identify host information and service information, creating groups of time slots using the service tags, using dynamic clustering to create clusters of hosts using the groups of time slots, and generating at least one common change window by eliminating duplicate hosts from the clusters of the hosts.
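The common-window computation can be illustrated as a set intersection over each host's low-consumption time slots (the hourly load data and the load ceiling are invented; annotation with service tags and dynamic clustering are omitted for brevity):

```python
# Hourly CPU load per host (assumed data); a slot is "low" when load < 20.
load = {
    "host-a": {0: 5, 1: 8, 2: 60, 3: 10},
    "host-b": {0: 7, 1: 55, 2: 9, 3: 12},
}

def low_slots(samples, ceiling=20):
    # Time slots of low resource consumption on one host.
    return {slot for slot, value in samples.items() if value < ceiling}

def common_change_window(load):
    # The common window is the intersection of every host's low-load
    # slots; iterating over unique host keys also avoids duplicates.
    windows = [low_slots(samples) for samples in load.values()]
    return sorted(set.intersection(*windows))

print(common_change_window(load))
```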
One example system includes an export engine to generate an environment-agnostic configuration file and an environment properties data structure based on a server program executing in the environment. The environment-agnostic configuration file includes representations of a set of environment dependent attributes from the set of configuration information, each representation for an environment dependent attribute including at least one token that replaces a value of the attribute in the representation, and representations of members of a set of environment independent attributes from the set of configuration information that are equivalent between two different environments. The environment properties data structure has, for each environment, a value that corresponds to the at least one token. An example system may compare previously generated files with current files to identify differences. Differences that represent malicious changes can trigger restoration of the configuration using the previously generated files.
A method for determining a misconfiguration of components in an Information Technology (IT) infrastructure includes decomposing one or more components into sub-parts, creating one or more synthetic objects, each synthetic object being associated with a sub-part of a respective component, and including the components and the synthetic objects in a model of the IT infrastructure. The method further determines a relationship between a first component and a first synthetic object based on attributes of the first component and attributes of the first synthetic object, includes the determined relationship in the model of the IT infrastructure, and loads a graph of the IT infrastructure in a graph database with the first component and the first synthetic object as nodes and the determined relationship as an edge in the graph. The method further determines the misconfiguration of components in the IT infrastructure by identifying components having improper relationships in the graph.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 12/24 - Arrangements for maintenance or administration
H04L 29/06 - Communication control; Communication processing characterised by a protocol
41.
Cooperative naming for configuration items in a distributed configuration management database environment
A first datastore discovers a configuration item (CI) without a persistent unique identifier in a distributed datastores environment. When the first datastore has authoritative naming rights, it determines an authoritative identification for the CI. When the first datastore has advisory naming rights, it suggests a name for the CI to a second datastore having authoritative naming rights. The second datastore determines that a pre-existing identification for the CI in the second datastore is the authoritative identification for the CI. If there is no pre-existing identification for the CI in the second datastore, the second datastore accepts the suggested name as the authoritative identification for the CI. When the first datastore has no naming rights for the CI, it sends the CI to a third datastore having authoritative naming rights for the CI to get an authoritative identification for the CI.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
42.
Creative and additive reconciliation of data records
A data management system includes a data reconciliation engine that identifies data sources that contain data records referencing a resource and determines whether each of the identified data sources is a creative data source or an additive data source. When all of the identified data sources are additive data sources, the reconciliation engine terminates a data reconciliation process. When not all of the identified data sources are additive data sources, the reconciliation engine finds a first creative data source from among the identified data sources, and initiates the data reconciliation process by merging data from the identified data sources, including the first creative data source, one data source at a time, into a reconciled data record.
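A minimal Python sketch of the creative/additive rule follows, assuming dict records and a simple "first writer wins" merge. The merge policy and record shapes are illustrative assumptions, not details from the abstract.

```python
def reconcile(sources):
    """sources: list of (kind, record) pairs, where kind is 'creative' or
    'additive' and record is a dict of attributes.  Returns None when no
    creative source exists (the process terminates), otherwise a merged
    record seeded from the first creative source."""
    if all(kind == "additive" for kind, _ in sources):
        return None  # no source may create the reconciled record
    # seed from the first creative source, then fold the rest in
    first = next(i for i, (k, _) in enumerate(sources) if k == "creative")
    merged = dict(sources[first][1])
    for i, (_, record) in enumerate(sources):
        if i == first:
            continue
        for attr, value in record.items():
            merged.setdefault(attr, value)  # additive data never overwrites
    return merged

sources = [
    ("additive", {"owner": "ops"}),
    ("creative", {"name": "host-7", "ip": "10.0.0.7"}),
    ("additive", {"ip": "10.9.9.9", "rack": "R2"}),
]
record = reconcile(sources)
```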
G06F 7/14 - Merging, i.e. combining at least two sets of record carriers each arranged in the same ordered sequence to produce a single set having the same ordered sequence
43.
Methods and apparatus related to graph transformation and synchronization
In one general aspect, a computer system can include instructions stored on a non-transitory computer-readable storage medium. The computer system can include a subgraph transformer configured to transform a plurality of subgraphs of a source graph into a plurality of transformed subgraphs, and configured to define a target graph that is a transformed version of the source graph based on the plurality of transformed subgraphs. The computer system can include a change detector configured to receive an indicator that a portion of the source graph has been changed, and a synchronization module configured to synchronize a portion of the target graph with the changed portion of the source graph.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/25 - Integrating or interfacing systems involving database management systems
A plurality of virtual machines executing on physical machines may be monitored, and performance data characterizing consumption of physical resources of the physical machines by the plurality of virtual machines during the observation time may be extracted. Each of the plurality of virtual machines may be classified as active or idle during each time division of a plurality of time divisions of the observation time, based on the performance data and on idleness criteria, to thereby generate an active-idle series for each of the plurality of virtual machines. For each active-idle series of each virtual machine of the plurality of virtual machines, at least one periodicity of recurring idle times within the observation time may be determined. Then, for each virtual machine with the at least one periodicity, an on-off schedule may be determined, and each of the virtual machines may be transitioned with the at least one periodicity between an on state and an off state in accordance with the on-off schedule.
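The active-idle classification and periodicity steps can be sketched as follows. The 5% idle threshold and the exact-repetition test for periodicity are deliberate simplifications chosen for the example, not the claimed method.

```python
def active_idle_series(cpu_usage, idle_threshold=5.0):
    """Classify each time division as active (1) or idle (0) based on a
    simple CPU-consumption criterion."""
    return [1 if u > idle_threshold else 0 for u in cpu_usage]

def idle_periodicity(series):
    """Return the smallest period at which the active-idle series repeats
    exactly, or None if no period shorter than the series is found."""
    n = len(series)
    for p in range(1, n):
        if all(series[i] == series[i % p] for i in range(n)):
            return p
    return None

# 24 divisions: the VM is idle for the last 4 of every 8 divisions
# (e.g. a recurring nightly quiet window)
usage = [50, 60, 40, 55, 1, 0, 2, 1] * 3
series = active_idle_series(usage)
period = idle_periodicity(series)
```

A VM with a detected period of 8 could then be given an on-off schedule that powers it down during the recurring idle window.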
A method for securing a service implemented on a computer network includes identifying network assets in the computer network used by the service. The method further includes identifying vulnerabilities in one or more of the network assets, determining an asset risk score for each of the network assets, and determining a service risk score for the service. The method involves implementing one or more vulnerability remediation actions on the computer network to reduce the service risk score and secure the service.
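The asset-score/service-score relationship can be illustrated with a small Python sketch. The scoring formulas (worst-finding-dominant asset score, 70/30 max/mean blend for the service) are invented for illustration; the abstract does not specify how the scores are computed.

```python
def asset_risk(severities):
    """Score an asset from its vulnerabilities' severities (0-10 scale),
    dominated by the worst finding and nudged upward by the rest."""
    if not severities:
        return 0.0
    worst = max(severities)
    return min(10.0, worst + 0.1 * (sum(severities) - worst))

def service_risk(assets):
    """assets maps each network asset used by the service to its list of
    vulnerability severities; the service score blends the riskiest asset
    with the average exposure across all assets."""
    scores = [asset_risk(v) for v in assets.values()]
    return round(0.7 * max(scores) + 0.3 * (sum(scores) / len(scores)), 2)

before = service_risk({"web": [9.8, 5.0], "db": [4.0]})
# remediating the critical finding on "web" lowers the service score
after = service_risk({"web": [5.0], "db": [4.0]})
```

The drop from `before` to `after` is the kind of measurable reduction a remediation action would target.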
A method includes receiving a floor map indicating a layout of a location, displaying at least a portion of the floor map, capturing signal strength data representing a signal field for at least one position on the floor map, identifying an asset within the layout of the location, determining at least one property that identifies the asset using one of a discovery process using a wireless protocol and an image processing application programming interface (API) configured to classify an image and detect individual within the image, updating the floor map with the asset and the at least one property, and communicating the asset and the at least one property to the remote computing device.
H04W 4/33 - Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 4/02 - Services making use of location information
One example system includes an export engine to generate an installation atomic for a source program based on a source environment. The installation atomic can include an environment-agnostic configuration file, an environment properties data structure, and compiled binary artifacts created based on the source program. The environment-agnostic configuration file includes representations of a set of environment-dependent attributes from the set of configuration information, each representation for an environment-dependent attribute including at least one token that replaces a value of the attribute in the representation, and representations of members of a set of environment-independent attributes from the set of configuration information that are equivalent between the source environment and target environments. The environment properties data structure has, for each of a plurality of target environments, a value that corresponds to the at least one token.
A graphical representation of a service model provides a full view of a portion of the graphical representation. A subgraph view may be displayed for nodes of the graphical representation of the service model that are associated with a selected node, including nodes that may not be visible in the full view. The subgraph view may be interactive, providing additional information regarding the nodes displayed in the subgraph view, and allowing nodes in the subgraph view to be made visible or invisible in the full view. Information may be displayed in the subgraph view about the status of the components being modeled by the service model corresponding to nodes displayed in the subgraph view.
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
49.
Extensibility of business logic shared across a business process orchestration engine, a rule engine, and a user interface
A method for codeless development of an application includes registering one or more actions in a registry. Each action is coded in a reusable block of code, each action having an action definition including an action type name, an input parameters map, and an output parameters map. The method further includes performing an action type name look-up in the registry for an invoked action, with an action service ensuring that the number of arguments included in the action definition matches the number of arguments specified by the action type, passing an input to and receiving a return value from the invoked action, updating the output parameters map included in the definition of the invoked action, and returning the updated output parameters map to an application in development for updating processing variables in the application.
A system for rapid deployment of content on a common publication platform. The system includes a rapid content deployment application hosted on a stand-alone or networked computer that is interfaced with the common publication platform. The rapid content deployment application includes a receiver to receive a file for publication on the common publication platform, a file existence checker to verify existence of a collaboration file on the common publication platform compatible with the received file, and a file preparer to prepare the received file for uploading to the common publication platform in compliance with one or more of governance, security, and change management policies including access control and authorization policies. The rapid content deployment application further includes a file uploader to upload the prepared file to the common publication platform for publication.
The embodiments provide an application diagnostics apparatus including an instrumentation engine configured to monitor one or more methods of a call chain of the application in response to a server request according to an instrumentation file specifying which methods are monitored and which methods are associated with a code extension, an extension determining unit configured to determine that at least one monitored method is associated with the code extension based on code extension identification information, a class loading unit configured to load the code extension from a resource file when the at least one monitored method associated with the code extension is called within the call chain, a code extension execution unit configured to execute one or more data collection processes, and a report generator configured to generate at least one report for display based on collected parameters.
A scheduling system for scheduling executions of tasks within a distributed computing system may include a file transfer manager configured to determine a file for transfer from a source location to a target location, the file being associated with file metadata characterizing the file, and with an organization. The file transfer manager may include an orchestrator configured to determine at least two transfer paths for the transfer, including at least a first transfer path utilizing a private wide area network (WAN) of the organization and a second transfer path utilizing a publicly available data hosting service, access transfer metadata characterizing the at least two transfer paths, and access organizational metadata characterizing organizational transfer path usage factors. The file transfer manager also may include a heuristics engine configured to execute path decision logic using the file metadata, the transfer metadata, and the organizational metadata, to thereby select a selected transfer path from the at least two transfer paths.
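The path decision logic can be sketched as a scoring heuristic over the candidate transfer paths. The field names, the cost/latency scoring, and the sensitivity penalty are assumptions made for the example; the abstract leaves the heuristics unspecified.

```python
def select_path(paths, file_meta, org_meta):
    """Pick a transfer path by scoring each candidate on transfer cost,
    rough expected transfer time for the file's size, and the
    organization's preference for keeping sensitive data on the
    private WAN.  Lower score wins."""
    def score(path):
        s = path["cost_per_gb"] * file_meta["size_gb"]
        s += file_meta["size_gb"] / path["throughput_gbps"]  # rough time
        if file_meta["sensitive"] and path["kind"] != "private-wan":
            s += org_meta["public_path_penalty"]
        return s
    return min(paths, key=score)

paths = [
    {"name": "wan", "kind": "private-wan",
     "cost_per_gb": 0.0, "throughput_gbps": 0.5},
    {"name": "cloud", "kind": "public",
     "cost_per_gb": 0.02, "throughput_gbps": 4.0},
]
org_meta = {"public_path_penalty": 100.0}
chosen = select_path(paths, {"size_gb": 10, "sensitive": True}, org_meta)
```

A sensitive file stays on the private WAN despite the slower link; a non-sensitive file of the same size would be routed to the faster public hosting service.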
In accordance with aspects of the disclosure, systems and methods are provided for normalizing data representing entities and relationships linking the entities including defining one or more graph rules describing searchable characteristics for the data representing the entities and relationships linking the entities, applying the one or more graph rules to the data representing the entities and the relationships linking the entities, identifying one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities, and performing one or more actions to update the one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities.
An access data collector collects access assignment data characterizing active access assignment operations of a hypervisor in assigning host computing resources among virtual machines for use in execution of the virtual machines. Then, a capacity risk indicator calculator calculates a capacity risk indicator characterizing a capacity risk of the host computing resources with respect to meeting a prospective capacity demand of the virtual machines, based on the access assignment data.
A computer system for classifying one or more servers by server type in a networked computing system to institute server-type based monitoring and/or maintenance of the networked computing system. The computer system includes a processor, a memory, a data receiver, a server signature generator, and a server-type tagging service. The data receiver collects server performance data for a first server over a time interval. The server signature generator determines a signature of the first server based on the collected server performance data. The server-type tagging service compares the signature of the first server to a signature of a second server of known server type, determines a similarity of the signature of the first server to the signature of the second server, and, based on the similarity, classifies the first server as being of the same server type as the second server.
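The signature-and-similarity idea can be sketched in Python. The (mean, peak) signature, cosine similarity, and the 0.98 acceptance threshold are all illustrative assumptions; the abstract does not fix a signature form or similarity measure.

```python
import math

def signature(perf_samples):
    """Reduce a server's per-metric time series to a signature vector of
    (mean, peak) per metric, in a fixed metric order."""
    sig = []
    for metric in sorted(perf_samples):
        values = perf_samples[metric]
        sig += [sum(values) / len(values), max(values)]
    return sig

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(unknown, labeled, min_similarity=0.98):
    """Tag the unknown server with the type of the most similar known
    server, provided the similarity clears the threshold."""
    best_type, best_sim = None, 0.0
    for server_type, sig in labeled.items():
        sim = cosine(unknown, sig)
        if sim > best_sim:
            best_type, best_sim = server_type, sim
    return best_type if best_sim >= min_similarity else None

labeled = {
    "web": signature({"cpu": [30, 35, 32], "net": [80, 90, 85]}),
    "db":  signature({"cpu": [70, 75, 72], "net": [10, 12, 11]}),
}
tag = classify(signature({"cpu": [31, 34, 33], "net": [82, 88, 86]}), labeled)
```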
A method and system create a model of a set of relationships between a set of parent computer network objects and a set of corresponding child computer network objects, over a period of time, and output a user interface graphing the model in a single view to illustrate the set of relationships over the period of time. The parent computer network objects include virtual machines and the child computer network objects include hosts. The user interface includes a search option to provide for a search of problems with the child computer network objects over the period of time.
G06F 30/18 - Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
G06F 11/32 - Monitoring with visual indication of the functioning of the machine
H04L 12/24 - Arrangements for maintenance or administration
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
57.
Statistical identification of instances during reconciliation process
A system for reconciling objects for a configuration management database employs statistical rules to reduce the amount of manual identification required by conventional reconciliation techniques. As users manually identify matches between source and target datasets, statistical rules are developed based on the criteria used for matching. Those statistical rules are then used for future matching. A threshold value is adjusted as the statistical rules are used, incrementing the threshold value when a rule successfully matches source and target objects. If the threshold value exceeds a predetermined acceptance value, the system may automatically accept a match made by a statistical rule. Otherwise, suggestions of possibly applicable rules may be presented to a user, who may use the suggested rules to match objects, causing adjustment of the threshold values associated with the suggested rules used.
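The rule-strengthening loop can be sketched as a small Python class. The attribute-equality matching criteria and the acceptance count of 3 are illustrative assumptions, not values from the abstract.

```python
class StatisticalRule:
    """A match rule learned from manual reconciliations.  Its threshold
    grows with each successful use; once past the acceptance value,
    matches it proposes are auto-accepted instead of merely suggested."""

    def __init__(self, criteria, acceptance=5):
        self.criteria = criteria      # attributes that must agree
        self.threshold = 0
        self.acceptance = acceptance

    def matches(self, source, target):
        return all(source.get(a) == target.get(a) for a in self.criteria)

    def apply(self, source, target):
        if not self.matches(source, target):
            return "no-match"
        self.threshold += 1           # a successful use strengthens the rule
        if self.threshold > self.acceptance:
            return "auto-accept"
        return "suggest"

rule = StatisticalRule(criteria=["serial", "hostname"], acceptance=3)
src = {"serial": "S1", "hostname": "h1", "os": "linux"}
tgt = {"serial": "S1", "hostname": "h1"}
outcomes = [rule.apply(src, tgt) for _ in range(5)]
```

The first three successful matches are only suggested to the user; after the threshold passes the acceptance value, the rule's matches are accepted automatically.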
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
Systems and methods provide automatic discovery of cluster membership based on transaction processing. An example method includes, at a source node of a first tier of nodes, generating a service identifier for a transaction that requests a service hosted by a second tier, the service identifier being based on a logical identifier for the second tier. The method also includes sending the transaction, including the service identifier, from the source node to the service hosted by the second tier. The method includes, at a destination node in the second tier, obtaining the service identifier from the transaction and reporting the service identifier with a destination node identifier to a visibility server as cluster information. The method also includes, at the visibility server, receiving cluster information from a plurality of destination nodes and assigning each of the plurality of destination nodes to a cluster based on the service identifiers.
The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals under which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined to be satisfied.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 12/911 - Network admission control and resource allocation, e.g. bandwidth allocation or in-call renegotiation
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
An environment for facilitating the management of content for users associated with specific partner networks is provided. Users may be granted access to such specific partner networks in accordance with each user's affiliation with one or more organizations. In accordance with the above, a content management system facilitates the content/information exchange by accepting software applications from content providers. Additionally, the content management system accepts software application specifications or manifests from partner network administrators. Accordingly, the content management system can audit and recommend actions to users regarding applicable software applications based on users' organizational associations. Still further, the content management system can facilitate requests from affiliated users for specific types of content that can be forwarded to content providers and later made available to affiliated users.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
An example system may include one or more collectors and an analyzer. The one or more collectors receive a plurality of data streams that include operational data for a plurality of application nodes. The plurality of data streams are captured and provided by a plurality of meters deployed on at least one cloud computing platform to respectively meter the plurality of application nodes. The analyzer processes the plurality of data streams to generate real-time performance data for a first application of a plurality of applications and generates, based on the real-time performance data for the application instances, statistics for data flows between components of the first application. The analyzer generates comparative statistics on the performance of the first application relative to the performance of the plurality of applications hosted, and reallocates, based on the comparative statistics, resources for the performance of the first application.
A method of administering a computing system, including a plurality of computing devices. The method includes selecting an application for download to a computing device, prior to downloading the application, decompiling the application, searching for string patterns in the decompiled application, replacing the string patterns in the decompiled application with a replacement string pattern, the replacement string pattern being configured to intercept at least one of a system event or an Application Programming Interface (API) call, and associating logic with the application. The logic is configured to interact with the application via the at least one system event or API call, the logic is configured to provide additional functions to the application, the logic is configured to be shared between the application and at least one other application, and the logic is stored separate from the application.
According to one general aspect, a method of using a first probing device may include monitoring one or more encrypted communications sessions between a first computing device and a second computing device. In some implementations of the method, each encrypted communications session includes transmitting a plurality of encrypted data objects between the first and second computing devices. The method may include deriving, by the first probing device, timing information regarding an encrypted communications session. The method may also include transmitting, from the first probing device to a second probing device, the derived timing information.
A modeling system has a database that stores information of resources of a computer network service. A server has a graphical user interface application for creating and editing service models. The application receives user-entered search criteria and searches information in the database based on the criteria. The search criteria can include a name, type, attribute, and other information of the resources. In addition, the search criteria can be a user-entered search query that has one or more logical or Boolean conditions relating resource attributes to attribute values. Using information obtained through searching, the application is used to create at least a portion of a service model of the computer network service. Once created, the application is used to initiate publishing of at least a portion of the service model to one or more impact managers of the computer network service.
A non-transitory computer-readable storage medium may include instructions stored thereon for ranking multiple computer modules to reduce failure impacts. When executed by at least one processor, the instructions may be configured to cause a computing system implementing the multiple computer modules to at least associate the multiple computer modules with multiple services that rely on the multiple computer modules, at least one of the multiple services relying on more than one of the multiple computer modules, determine values of the multiple services, and rank the multiple computer modules based on the determined values of the multiple services with which the respective multiple computer modules are associated.
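The ranking described above reduces to ordering modules by the total value of the services that depend on them. A minimal Python sketch, with module names, service names, and values invented for the example:

```python
def rank_modules(module_services, service_values):
    """Rank modules by the total value of the services that rely on them,
    highest first; a failure in a top-ranked module would have the
    largest impact on the determined service values."""
    def impact(module):
        return sum(service_values[s] for s in module_services[module])
    return sorted(module_services, key=impact, reverse=True)

module_services = {
    "auth":    ["checkout", "login"],   # relied on by two services
    "search":  ["catalog"],
    "billing": ["checkout"],
}
service_values = {"checkout": 100, "login": 40, "catalog": 25}
ranking = rank_modules(module_services, service_values)
```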
Disclosed are methods and systems to provide coordinated identification of data items across a plurality of distributed data storage repositories (datastores). In one disclosed embodiment, a single configuration management database (CMDB) controls identification rights for all CIs as they are first identified in a master/slave relationship with all other CMDBs in the distributed environment. In a second embodiment, a plurality of CMDBs divide identification rights based upon coordination identification rules where certain CMDBs are assigned authoritative identification rights for CIs matching the rules of a particular CMDB in the distributed environment. In a third embodiment, one or more of the plurality of CMDBs may also have advisory identification rights for CIs which do not already have an identifiable unique identity and can coordinate with an authoritative CMDB to establish an identity for CIs.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer software for the remote collection of data from IT operations management (ITOM) software; computer software for integrating IT operations management (ITOM) software; software as a service (SAAS) services featuring software for the remote collection of data from IT operations management (ITOM) software; software as a service (SAAS) services featuring software for integrating IT operations management (ITOM) software
68.
Cloud service interdependency relationship detection
A computer system includes a processor, a memory, a data collector, a relationships analyzer, and a topological map generator. The data collector retrieves performance data in a specific set of performance categories for computing resources in a computing system for a time interval. The relationships analyzer, for each computing resource-to-computing resource pair in the computing system, performs a correlation analysis of the respective behavior values of the computing resources in the pair, and identifies the computing resource-to-computing resource pairs that have correlation values exceeding a pre-determined threshold level as having performance interdependencies. The topological map generator prepares an undirected graph of the computing resources that have performance interdependencies, and displays the undirected graph as a topological map of the computing resources in the computing system.
A computer system for behavioral analytics of native Information Technology Service Management (ITSM) incident handling data includes a processor, a memory, a de-normalized target data source for behavioral analysis, a transformation processor, and a statistical processor. The transformation processor reads an identified portion of the ITSM data and creates new normalized fields for the de-normalized target data source by parsing selected text fields from the portion of ITSM data. The created new normalized fields include a working group field and an associated support level field. The transformation processor further creates new de-normalized aggregation fields for the de-normalized target data source based on the newly created normalized fields. The newly created de-normalized aggregation fields include fields characterizing incident handling behavior. The statistical processor further processes the target data for behavioral analytics. The transformation processor populates the target data source's de-normalized data fields with aggregated incident handling data and behavioral characterizations.
A container set manager may determine a plurality of container sets, each container set specifying a non-functional architectural concern associated with deployment of a service within at least one data center. A decision table manager may determine a decision table specifying relative priority levels of the container sets relative to one another with respect to the deployment. A placement engine may determine an instance of an application placement model (APM), based on the plurality of container sets and the decision table, determine an instance of a data center placement model (DPM) representing the at least one data center, and generate a placement plan for the deployment, based on the APM instance and the DPM instance.
A generic discovery methodology collects data pertaining to components of a computer network using various discovery technologies. From the collected data, the methodology identifies, filters and analyzes information related to inter-component communications. Using the communication and application information, the methodology determines reliable relationships for those components having sufficient information available. To qualify more components, the methodology implements a decision service to generate hypothetical relationships between components that are known and components that are unqualified or unknown. The hypothetical relationships are presented to a user for selection, and each hypothetical relationship is preferably associated with an indication of its reliability.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Detection of anomalous events in the operation of information technology (IT) components includes receiving messages, which describe events in the operation of the IT components in real time, and categorizing and condensing the messages received in a first time interval into message patterns by message pattern type. Based on a distribution of occurrences of the message patterns in the first time interval and in preceding time intervals, anomaly scores are assigned to the message patterns, and one or more of the message patterns are classified as being anomalous message patterns that correspond to potentially anomalous events in the operation of the IT infrastructure installation. A degree of correlation between occurrences of the anomalous message patterns and occurrences of application alarms is determined. Message patterns with high anomaly scores and having a high degree of correlation with application alarms are deemed significant and prioritized for display to users.
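One simple way to score message patterns against preceding intervals is to compare each pattern's share of traffic now with its average share in the past. This sketch is an illustration only; the counts and the absolute-deviation score are assumptions, and the correlation with application alarms is omitted.

```python
def anomaly_scores(current, history):
    """Score each message-pattern type by how far its share of traffic in
    the current interval departs from its average share over the
    preceding intervals (0 = perfectly ordinary)."""
    cur_total = sum(current.values()) or 1
    scores = {}
    for pattern in current:
        cur_share = current[pattern] / cur_total
        past_shares = [h.get(pattern, 0) / (sum(h.values()) or 1)
                       for h in history]
        baseline = sum(past_shares) / len(past_shares)
        scores[pattern] = abs(cur_share - baseline)
    return scores

# message-pattern counts per time interval
history = [{"login ok": 80, "timeout": 10, "heartbeat": 10},
           {"login ok": 78, "timeout": 12, "heartbeat": 10}]
current = {"login ok": 50, "timeout": 40, "heartbeat": 10}
scores = anomaly_scores(current, history)
```

The surge in `timeout` messages (and the matching drop in `login ok`) scores highly, while the steady `heartbeat` pattern scores near zero; the high-scoring patterns would then be checked for correlation with application alarms before being prioritized for display.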
The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals in which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined as satisfied.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 12/911 - Network admission control and resource allocation, e.g. bandwidth allocation or in-call renegotiation
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
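The provisioning-policy evaluation described above can be sketched as a simple condition check over an operational profile; the tuple-based policy shape and the metric names here are hypothetical illustrations only.

```python
def evaluate_policies(policies, profile):
    """Return the re-allocation actions of every provisioning policy whose
    condition is satisfied by the current operational profile.
    policies: list of (metric_name, threshold, action) tuples.
    profile: {metric_name: current_value} measured at runtime.
    """
    actions = []
    for metric, threshold, action in policies:
        # A policy fires when its metric exceeds the configured threshold.
        if profile.get(metric, 0) > threshold:
            actions.append(action)
    return actions
```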
A view generator receives support text characterizing a support requirement for available information technology (IT) support, the support text being received in sentence form via a graphical user interface (GUI). A text analyzer performs natural language processing on the support text and thereby identifies at least one sentence part and at least one named entity within the support text. A support record generator relates each of the at least one sentence part and the at least one named entity to a support record type, and generates a support data record for the support requirement, including filling individual fields of the support data record using the at least one sentence part and the at least one named entity.
Systems and methods provide automatic discovery of cluster membership based on transaction processing. An example method includes, at a source node of a first tier of nodes, generating a service identifier for a transaction that requests a service hosted by a second tier, the service identifier being based on a logical identifier for the second tier. The method also includes sending the transaction, including the service identifier, from the source node to the service hosted by the second tier. The method includes, at a destination node in the second tier, obtaining the service identifier from the transaction and reporting the service identifier with a destination node identifier to a visibility server as cluster information. The method also includes, at the visibility server, receiving cluster information from a plurality of destination nodes and assigning each of the plurality of destination nodes to a cluster based on the service identifiers.
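The visibility server's grouping step can be sketched in a few lines: destination nodes report (service identifier, node identifier) pairs, and nodes reporting the same service identifier fall into the same cluster. The function below is an assumed illustration of that step, not the patented implementation.

```python
def assign_clusters(reports):
    """Group destination nodes into clusters keyed by the service
    identifier each node reported to the visibility server.
    reports: iterable of (service_id, node_id) pairs.
    """
    clusters = {}
    for service_id, node_id in reports:
        # Nodes sharing a service identifier belong to the same cluster.
        clusters.setdefault(service_id, set()).add(node_id)
    return clusters
```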
A system includes, for each individual data center of a multiplex data center, a collector component, a local data repository, and a model building component. The collector component collects performance metrics of a computing workload running in each individual data center of the multiplex data center and stores the collected performance metrics in the local data repository. The model building component builds a respective individual model of data center resource use for each individual CPC in the individual data center using the stored performance metrics. The system further includes a model merging component configured to receive and combine the individual CPC models created by the model building components for the individual data centers into a single multiplex data center model applicable to the computing workload across the multiplex data center.
An example system may include one or more collectors and an analyzer. The one or more collectors receive a plurality of data streams that include operational data for a plurality of application nodes. The plurality of data streams are captured and provided by a plurality of meters deployed on at least one cloud computing platform to respectively meter the plurality of application nodes. The analyzer processes the plurality of data streams to generate real-time performance data for a first application of a plurality of applications and generates, based on the real-time performance data for the application instances, statistics for data flows between components of the first application. The analyzer generates comparative statistics on the performance of the first application relative to the performance of the plurality of hosted applications and reallocates, based on the comparative statistics, resources for the performance of the first application.
Processes and integrations include a method for managing a business process application development lifecycle. The method includes initiating, in a planning stage, requirements for an application based on adding new features to an existing application or creating a new application; implementing, in a development stage, a service process node (SPN) as a business process; and managing, in an operations stage, software code representing the application in a production environment. The SPN is configured to encapsulate at least one business service object and generate an interface configured to expose internal processes of the at least one business service object.
Overlay datasets provide an efficient, flexible and scalable mechanism to represent the logical replication of one or more prior defined datasets. Only changes made to an entity in an overlay dataset's underlying dataset are replicated into the overlay dataset (such changes do not affect the underlying dataset). Read operations directed to the overlay dataset will find entities in the overlay dataset if they exist and in the underlying dataset(s) if no overlay-specific entity exists. Accordingly, overlay datasets provide an efficient mechanism for making changes to an existing dataset without suffering the high processing time and storage overhead associated with prior art copying and versioning techniques. Overlay datasets also provide a natural mechanism to keep two or more datasets in synchronization because changes to a base or underlying dataset's entities are “visible” in its associated overlay dataset (unless the entity has been modified in the overlay dataset).
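The read-through semantics described above can be sketched with a minimal class: writes land only in the overlay, and reads fall back to the underlying dataset when no overlay-specific entity exists. The class and method names below are illustrative assumptions.

```python
class OverlayDataset:
    """Sketch of overlay-dataset read/write semantics: the underlying
    dataset is never mutated, and only changed entities are stored in
    the overlay."""

    def __init__(self, underlying):
        self.underlying = underlying  # base dataset, read-only here
        self.overlay = {}             # only modified entities live here

    def write(self, key, value):
        # Changes are replicated into the overlay only.
        self.overlay[key] = value

    def read(self, key):
        # Prefer the overlay entity; fall through to the underlying dataset.
        if key in self.overlay:
            return self.overlay[key]
        return self.underlying.get(key)
```

Note that unmodified entities remain "visible" through the overlay, so a change to the base dataset is immediately reflected in overlay reads unless the overlay holds its own copy of that entity.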
Techniques are described to allow the deprecation of classes in an object-oriented data model, such as a CDM for a CMDB. When a class is deprecated and replaced by another existing or new class, data associated with instances of the deprecated class may be migrated to the replacement class. A mapping between the deprecated class and its replacement class may be provided to allow existing applications to continue to access data using the deprecated class without change until the deprecated class is finally deleted or the application is updated to use the replacement class. New applications written to use the object-oriented data model after the deprecation may use the replacement class to access data instances created using the original data model.
According to one general aspect, a method may include receiving a data query request that includes one or more search parameters to be searched for within a plurality of files that are stored according to a hierarchical organizational structure, wherein each file includes at least one data record. The method may include scanning the plurality of files to identify candidate files that match a sub-portion of the search parameters. The method may further include parsing the candidate files to determine which, if any, records included in the respective candidate files meet the search parameters. The method may include generating, by one or more result analyzers, query results from the resultant data. The method may also include streaming, to the user device, the query results as at least one query result becomes available, beginning the streaming before the query results have been fully generated.
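The scan-then-parse-then-stream flow lends itself to a generator sketch: a cheap filter narrows the files to candidates, and matching records are yielded as they are found, so the consumer can begin before the full result set exists. The data shapes and callables below are assumptions for illustration.

```python
def stream_query(files, candidate_filter, record_matches):
    """Yield matching records incrementally.
    files: {path: [record, ...]} standing in for a hierarchical store.
    candidate_filter: cheap per-file test for a sub-portion of the
        search parameters.
    record_matches: full per-record test of the search parameters.
    """
    for path, records in files.items():
        if not candidate_filter(path):
            continue  # file fails the cheap sub-portion check
        for record in records:
            if record_matches(record):
                # Yielding here lets streaming start before the scan ends.
                yield record
```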
A graphical representation of a service model provides a full view of a portion of the graphical representation. A sub graph view may be displayed for nodes of the graphical representation of the service model that are associated with a selected node, including nodes that may not be visible in the full view. The sub graph view may be interactive, providing additional information regarding the nodes displayed in the sub graph view, and allowing nodes in the sub graph view to be made visible or invisible in the full view. Information may be displayed in the sub graph view about the status of the components being modeled by the service model corresponding to nodes displayed in the sub graph view.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
Provisioning of containers for virtualized applications
In a general aspect, a computer-implemented method can include receiving a request to provision a plurality of containers of an application across a plurality of data center hosts and iteratively placing the plurality of containers on the plurality of data center hosts. The containers can be selected for placement based on one of a locality constraint and an association with previously-placed containers. Placement of a selected container can be based on, at least, compute requirements of the selected container, network requirements of the selected container, configuration of the plurality of data center hosts, and performance metrics for the plurality of data center hosts.
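The iterative placement can be illustrated with a toy greedy heuristic: locality-constrained containers are selected first, and each container goes to the host with the most remaining capacity that satisfies its compute requirement. The field names, selection order, and scoring below are assumptions, not the patented placement algorithm (which also weighs network requirements, host configuration, and performance metrics).

```python
def place_containers(containers, hosts):
    """Greedy placement sketch.
    containers: list of {"name", "cpu", "locality"} dicts.
    hosts: {host_name: capacity_units}.
    Returns {container_name: host_name}.
    """
    # Select locality-constrained containers before unconstrained ones.
    order = sorted(containers, key=lambda c: not c.get("locality"))
    placement = {}
    free = dict(hosts)  # host -> remaining capacity units
    for c in order:
        candidates = [h for h, cap in free.items() if cap >= c["cpu"]]
        if not candidates:
            raise RuntimeError(f"no host fits container {c['name']}")
        # Spread load: pick the host with the most remaining capacity.
        host = max(candidates, key=lambda h: free[h])
        placement[c["name"]] = host
        free[host] -= c["cpu"]
    return placement
```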
An authentication engine may be configured to receive an authentication request and credentials from a client. The authentication engine may then generate a proxy agent configured to interact with an identity provider to authenticate the client on behalf of the client, using the credentials. In this way, the authentication engine may receive an assertion of authentication of the client from the identity provider, by way of the proxy agent.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
In accordance with aspects of the disclosure, systems and methods are provided for normalizing data representing entities and relationships linking the entities including defining one or more graph rules describing searchable characteristics for the data representing the entities and relationships linking the entities, applying the one or more graph rules to the data representing the entities and the relationships linking the entities, identifying one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities, and performing one or more actions to update the one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities.
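The rule-matching step can be sketched as a scan of relationships against declarative graph rules, collecting the matching instances that a subsequent normalization action would update. The tuple-based rule representation and field names below are hypothetical.

```python
def match_graph_rules(rules, relationships):
    """Return (rule_name, relationship) pairs for every relationship
    whose endpoint types satisfy a rule's searchable characteristics.
    rules: list of (rule_name, source_type, target_type) tuples.
    relationships: list of {"src_type", "dst_type", "src", "dst"} dicts.
    """
    hits = []
    for name, src_type, dst_type in rules:
        for rel in relationships:
            # A matching instance pairs the rule with the relationship.
            if rel["src_type"] == src_type and rel["dst_type"] == dst_type:
                hits.append((name, rel))
    return hits
```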
Disclosed is a method, a system and a computer readable medium for additive independent object modification. The method includes determining an association between an independent object modification and a base object of a software application, modifying at least one element of the base object based on the associated independent object modification, and configuring the software application to execute in a computer system using the modified base object.
A metadata framework helps enforce referential integrity in object data documents. In one general aspect, a method includes generating a first data definition language statement, based on a class defined in a metadata framework, that creates a table in a relational database system to store an object data document. The table may include at least one column that corresponds to an identifying attribute in the object data document, at least one column that corresponds to a relationship attribute in the object data document, and a column that stores the object data document. The method may also include generating a second data definition language statement, based on the referential integrity metadata framework, that creates a foreign key constraint on the at least one column that corresponds to the relationship attribute when the relationship is not polymorphic, and issuing the first data definition language statement and the second data definition language statement.
A method to reconcile multiple instances of a single computer resource identified by resource discovery operations includes: (1) accessing information describing one or more resources; (2) identifying, via the accessed information, at least one resource that has been detected or discovered by at least two of the discovery operations; and (3) merging attributes associated with the identified resource from each of the at least two discovery operations into a single, reconciled resource object. Illustrative “resources” include, but are not limited to, computer systems, components of computer systems, data storage systems, switches, routers, memory, software applications (e.g., accounting and database applications), operating systems and business services (e.g., order entry or change management and tracking services).
G06F 7/32 - Merging, i.e. combining data contained in ordered sequence on at least two record carriers to produce a single carrier or set of carriers having all the original data in the ordered sequence
G06F 7/20 - Comparing separate sets of record carriers arranged in the same sequence to determine whether at least some of the data in one set is identical with that in the other set or sets
G06F 7/14 - Merging, i.e. combining at least two sets of record carriers each arranged in the same ordered sequence to produce a single set having the same ordered sequence
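The attribute-merging step of the reconciliation method above can be sketched in a few lines. The precedence rule chosen here (earlier discovery operations win, later ones only fill gaps) is an assumption for illustration.

```python
def reconcile(discovered):
    """Merge attribute dicts for one resource reported by multiple
    discovery operations into a single reconciled resource object.
    discovered: list of {attribute: value} dicts, one per discovery op.
    """
    merged = {}
    for attrs in discovered:
        for key, value in attrs.items():
            # Keep the first-seen value; later discoveries fill gaps only.
            merged.setdefault(key, value)
    return merged
```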
A method includes receiving a map indicating a layout of a location, receiving a point-of-interest (POI) data structure representing a POI and POI metadata associated with the POI, generating an annotated floor map based on the map, the annotated floor map including a POI indicator placed on the map at the location of the POI, the POI indicator indicating the type of the POI and the status of the POI, displaying at least a portion of the annotated floor map, and, in response to the client computing device moving within the location or out of the location, transmitting location information to a map selector and receiving one or more maps selected by the map selector, the one or more maps including or bounded by the location information.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G01C 21/20 - Instruments for performing navigational calculations
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
H04W 4/70 - Services for machine-to-machine communication [M2M] or machine type communication [MTC]
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
In a general aspect, a system can include a user interface with at least one input field for receiving input associated with an information technology (IT) customer service issue and a response area for displaying results in response to the input. The system can further include a context generation engine that receives the input associated with the IT customer service issue from the user interface and determines, based on the input, a multi-factor context. The system can also include a relevance-based search engine configured to search, based on the multi-factor context, a plurality of resources; assign, based on the multi-factor context, a respective relevancy score to each of the plurality of resources; and provide, to the user interface for display in the response area, a ranked list of a subset of the plurality of resources that is ordered based on the respective relevancy scores of the subset of the resources.
A management system for determining causal relationships among system entities may include a causal relationship detector configured to receive events from a computing environment having a plurality of entities, and detect causal relationships among the plurality of entities, during runtime of the computing environment, based on the events, and a rules converter configured to convert one or more of the causal relationships into at least one behavioral rule. The at least one behavioral rule may indicate a causal relationship between at least two entities of the plurality of entities.
A method of administering a computing system including a plurality of computing devices. The method includes selecting an application for inclusion in a menu of applications downloadable to a computing device and interposing a wrapper on the application before the computing device downloads the application, the wrapper being configured to control an operation of the application. Interposing the wrapper on the application includes decompiling the application, searching for string patterns, replacing the string patterns with another string pattern configured to intercept at least one of a system event or an Application Programming Interface (API) call, and associating logic with the application. The logic is configured to interact with the application via the at least one system event or API call, the logic is configured to provide additional functions to the application, and the logic is stored separately from the application.
A non-transitory computer-readable storage medium may comprise instructions for managing a server template stored thereon. When executed by at least one processor, the instructions may be configured to cause at least one computing system to at least convert the server template to a corresponding virtual machine, manage the corresponding virtual machine, and convert the corresponding virtual machine back into a template format.
In one general aspect, a method can include creating an action, the creating including annotating a block of code with metadata, and encapsulating the annotated block of code into a reusable building block of code. The method can further include publishing the action, the publishing including registering the action in a service registry. The method can further include dynamically discovering the action in the service registry by an application during runtime, invoking the action by the application, and executing the action by the application, the executing performing a method specified by the action.
An access data collector collects access assignment data characterizing active access assignment operations of a hypervisor in assigning host computing resources among virtual machines for use in execution of the virtual machines. Then, a capacity risk indicator calculator calculates a capacity risk indicator characterizing a capacity risk of the host computing resources with respect to meeting a prospective capacity demand of the virtual machines, based on the access assignment data.
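The capacity-risk calculation above can be illustrated with a toy indicator: the fraction of sampled assignment totals that exceed a high-water mark of host capacity. The formula and the 80% threshold are illustrative assumptions, not the described calculator.

```python
def capacity_risk(assignment_samples, host_capacity):
    """Toy capacity-risk indicator.
    assignment_samples: sampled totals of resources the hypervisor
        actively assigned to virtual machines.
    host_capacity: total host computing resources available.
    Returns the fraction of samples above an assumed 80% high-water mark.
    """
    threshold = 0.8 * host_capacity
    over = sum(1 for total in assignment_samples if total > threshold)
    # A higher fraction means less headroom for prospective demand.
    return over / len(assignment_samples)
```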
A non-transitory computer-readable storage medium may include instructions stored thereon for ranking multiple computer modules to reduce failure impacts. When executed by at least one processor, the instructions may be configured to cause a computing system implementing the multiple computer modules to at least associate the multiple computer modules with multiple services that rely on the multiple computer modules, at least one of the multiple services relying on more than one of the multiple computer modules, determine values of the multiple services, and rank the multiple computer modules based on the determined values of the multiple services with which the respective multiple computer modules are associated.
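The ranking step described above can be sketched directly: each module's impact is the total value of the services relying on it, and modules are ordered highest impact first. The data shapes below are assumptions for illustration.

```python
def rank_modules(module_services, service_values):
    """Rank modules by the total value of services that rely on them.
    module_services: {module: [service, ...]} reliance mapping.
    service_values: {service: value}.
    Returns module names ordered from highest to lowest total value.
    """
    impact = {
        module: sum(service_values[s] for s in services)
        for module, services in module_services.items()
    }
    # Modules backing the most valuable services rank first, so their
    # failures would have the largest impact.
    return sorted(impact, key=impact.get, reverse=True)
```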
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer software; operating system utility computer programs; computer software and programs for managing computer systems, databases and applications, namely providing data management, application management and performance optimization and recovery of mainframe and distributed systems computers and the database and business applications, programs and systems that operate therein; all of the aforesaid goods not being for optical, signalling, checking (supervision) and lifesaving apparatus and instruments, alarms, optical signal transmitter, and acoustic signal transmitter. Software as a service (SaaS) services; software as a service (SaaS) services featuring operating system utility computer programs; software as a service (SaaS) services featuring software and programs for managing computer systems, databases and applications, namely providing data management, application management and performance optimization and recovery of mainframe and distributed systems computers and the database and business applications, programs and systems that operate therein; all of the aforesaid services not being for optical, signalling, checking (supervision) and lifesaving apparatus and instruments, alarms, optical signal transmitter, and acoustic signal transmitter.