09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Teaching apparatus; computer software applications,
downloadable; computer programs, recorded; computer game
software, downloadable; computer programs, downloadable;
monitors [computer hardware]; computer software, recorded;
computer operating programs, recorded; electronic
publications, downloadable; software platforms, recorded or
downloadable; mouse pads; processors [central processing
units]; radar apparatus; data processing apparatus; readers
[data processing equipment]; USB flash drives; chips
[integrated circuits]; data sets, recorded or downloadable;
scanners for data processing; smartphones; mobile
telephones; electronic key fobs being remote control
apparatus; gauges; detectors; transmitters
[telecommunication]; transmitters of electronic signals;
distance measuring apparatus; transmitting sets
[telecommunication]; integrated circuit cards [smart cards];
sound reproduction apparatus; wearable activity trackers;
whistle alarms; electronic collars to train animals; dog
whistles.

Presentation of goods on communication media, for retail
purposes; commercial administration of the licensing of the
goods and services of others; sales promotion for others;
providing commercial information and advice for consumers in
the choice of products and services; providing commercial
and business contact information; marketing in the framework
of software publishing; demonstration of goods; organization
of exhibitions for commercial or advertising purposes;
provision of an online marketplace for buyers and sellers of
goods and services.

Providing access to databases; news agency services;
transmission of digital files; rental of access time to
global computer networks; providing user access to global
computer networks; providing internet chatrooms; providing
telecommunications connections to a global computer network;
providing information in the field of telecommunications;
telecommunications routing and junction services.

Computer software design; monitoring of computer systems to
detect breakdowns; monitoring of computer systems for
detecting unauthorized access or data breach; electronic
monitoring of credit card activity to detect fraud via the
internet; electronic monitoring of personally identifying
information to detect identity theft via the internet;
updating of computer software; computer virus protection
services; research in the field of artificial intelligence
technology; research and development of new products for
others; scientific and technological research relating to
patent mapping; scientific research; technological research;
artificial intelligence consultancy; website design
consultancy; computer technology consultancy; consultancy in
the design and development of computer hardware; computer
software consultancy; technological consultancy; software as
a service [SaaS]; platform as a service [PaaS]; computer
system design; providing virtual computer systems through
cloud computing; providing information relating to computer
technology and programming via a website; provision of a
cloud-ready software platform [PaaS]; providing search
engines for the internet; conversion of computer programs
and data, other than physical conversion; development of
computer platforms; design of computer-simulated models;
computer programming services for data processing;
technological consultancy services for digital
transformation; installation of computer software; rental of
application software; maintenance of software; creating and
maintaining websites for others; computer programming;
off-site data backup; telecommunications technology
consultancy; user authentication services using single
sign-on technology for online software applications; user
authentication services using technology for e-commerce
transactions; data encryption services; electronic data
storage; writing of computer code; computer security
consultancy; internet security consultancy;
telecommunication network security consultancy; data
security consultancy; providing online non-downloadable
computer software; computer graphic design for video
projection mapping; software engineering services for data
processing; hosting computer websites; rental of web
servers; computer rental; cloud computing; computer
technology services provided on an outsourcing basis;
monitoring of computer system operation by remote access.
2.
SYSTEM AND METHOD FOR GENERATING A SIGNATURE OF A SPAM MESSAGE BASED ON CLUSTERING
A method for generating a signature of a spam message includes determining one or more classification attributes and one or more clustering attributes contained in successively intercepted first and second electronic messages. The first electronic message is classified using a trained classification model for classifying electronic messages based on the one or more classification attributes. The first electronic message is classified as spam if a degree of similarity of the first electronic message to one or more spam messages is greater than a predetermined value. A determination is made whether the first electronic message and the second electronic message belong to a single cluster based on the determined one or more clustering attributes. A signature of a spam message is generated based on the identified single cluster of electronic messages.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
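The abstract of this entry outlines the flow only at a high level. As an illustration of the general idea (classify a message, check whether two messages share clustering attributes, derive a signature from the shared attributes), the following Python sketch may help; the class names, chosen attributes, the 0.8 similarity threshold and the SHA-256 signature are assumptions made for the example, not details taken from the patent.

```python
# Illustrative sketch only -- names, attributes and thresholds are hypothetical.
import hashlib
from dataclasses import dataclass

@dataclass
class Message:
    sender_domain: str
    subject: str
    body: str

def clustering_attributes(msg: Message) -> tuple:
    """Attributes used to decide whether two messages fall into one cluster."""
    return (msg.sender_domain, msg.subject.lower().strip())

def is_spam(similarity_to_known_spam: float, threshold: float = 0.8) -> bool:
    # In the abstract this similarity comes from a trained classification model;
    # here the score is simply passed in.
    return similarity_to_known_spam > threshold

def same_cluster(a: Message, b: Message) -> bool:
    return clustering_attributes(a) == clustering_attributes(b)

def cluster_signature(cluster: list) -> str:
    """Signature derived from the attributes shared by the clustered messages."""
    shared = clustering_attributes(cluster[0])
    return hashlib.sha256("|".join(shared).encode()).hexdigest()

if __name__ == "__main__":
    m1 = Message("example.test", "WIN A PRIZE", "click here ...")
    m2 = Message("example.test", "win a prize", "click now ...")
    if is_spam(similarity_to_known_spam=0.93) and same_cluster(m1, m2):
        print("spam signature:", cluster_signature([m1, m2]))
```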
A method for building a security monitor includes identifying one or more objects of a microkernel Operating System (OS) participating in transmission of an Inter Process Communication (IPC) message. The one or more OS objects include one or more processes and/or one or more applications executed by the microkernel OS. One or more security policies associated with the identified microkernel OS objects are selected from a security policy database. A policy verification module is configured based on the selected security policies to generate a decision related to controlling the transmission of the IPC message. A security monitor is generated using the configured policy verification module to control the transmission of the message based on the decision generated by the policy verification module.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Disclosed herein are systems and methods for security monitoring and incident response using large language models. In one aspect, an exemplary method comprises: receiving input data from elements of a Security Operations Center (SOC), generating and sending a query based on the received input data to a Large Language Model (LLM), parsing a response received from the LLM, and performing analysis to determine whether a threat has been identified. In one aspect, the method further comprises: when a threat is identified, collecting artifacts of the threat, and analyzing the threat further with involvement of security professionals, when a threat is not identified, determining whether additional data is needed, and when additional data is needed, determining a type of the additional data, when the type of the additional data is determined, collecting additional information from elements of the SOC, and when additional data is not needed, terminating the incident response.
Disclosed herein are systems and methods for anti-virus scanning of objects on a mobile device. In one aspect, an exemplary method comprises: receiving, by a security module, a command from a protection module of a third-party application to perform an anti-virus scan of an object, when a mobile security application is installed or pre-installed but not activated, activating the mobile security application, when the mobile security application is not installed or pre-installed on the mobile device, installing and activating the mobile security application, transmitting the object to the mobile security application, performing an anti-virus scan of the object to determine whether the object is malicious, transmitting results of the anti-virus scan to the protection module of the third-party application, selecting at least one response measure based on the result of the anti-virus scan, and applying the at least one selected response measure.
Disclosed herein are systems and methods for classifying objects to prevent the spread of malicious activity. In one aspect, an exemplary method comprises: searching for objects in a network that have generic information with other objects and collecting information about the objects, generating a graph of associations containing classified and unclassified objects in a form of vertices, whereby an association between objects indicates a presence of generic information between the objects, wherein the classified objects comprise malicious objects, extracting from the generated graph of associations at least one subgraph comprising homogeneous objects and containing at least one unclassified object based on at least one of the following: an analysis of the group association between objects; and an analysis of sequential association between objects, classifying each unclassified object in each subgraph based on the analysis using classification rules, and restricting access to an object that is classified as malicious.
Disclosed herein are systems and methods for creating a classifier for detecting phishing sites using Document Object Model (DOM) hashes. In one aspect, an exemplary method comprises: parsing each page of the website, wherein the parsing includes at least generating a DOM tree of the page, for each page, generating at least one string of DOM tree elements according to predetermined patterns, creating a first hash based on the string, creating a second hash for the page, generating a first dataset comprising hashes of safe pages and a second dataset comprising hashes of phishing pages, analyzing the first and second datasets to determine whether there is diversity of data in each dataset, generating a training sample from the datasets when there is diversity of data, and training a classifier of a machine learning model based on the training sample generated from the first and second datasets.
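As a rough illustration of the pipeline described above (DOM tree, string of elements, hash, datasets, classifier), here is a minimal Python sketch built on the standard-library HTML parser. The tag-sequence "DOM string", the SHA-256 hash and the set-lookup "classifier" are stand-ins chosen for brevity; the patented patterns, second hash and machine-learning model are not reproduced here.

```python
# Minimal, illustrative sketch; all names and the toy classifier are hypothetical.
import hashlib
from html.parser import HTMLParser

class DomStringBuilder(HTMLParser):
    """Collects the sequence of opened tags -- a flat stand-in for a DOM-tree walk."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def dom_hash(page_html: str) -> str:
    parser = DomStringBuilder()
    parser.feed(page_html)
    dom_string = ">".join(parser.tags)          # string of DOM tree elements
    return hashlib.sha256(dom_string.encode()).hexdigest()

def build_dataset(pages: list) -> set:
    return {dom_hash(p) for p in pages}

def classify(page_html: str, safe_hashes: set, phishing_hashes: set) -> str:
    h = dom_hash(page_html)
    if h in phishing_hashes:
        return "phishing"
    if h in safe_hashes:
        return "safe"
    return "unknown"  # a real system would feed hash-derived features to a trained model

if __name__ == "__main__":
    safe = build_dataset(["<html><body><p>hello</p></body></html>"])
    phishing = build_dataset(["<html><body><form><input></form></body></html>"])
    print(classify("<html><body><form><input></form></body></html>", safe, phishing))
```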
Disclosed herein are systems and methods for enhancing the security of isolated execution environments of an authorized user. In one aspect, an exemplary method comprises: identifying at least one computer system on which a user is authorized, forming an isolated execution environment for execution of a security application, detecting at least two isolated execution environments using an isolated execution environment of the installed security application on the identified computer system, and forming a secure integration of the identified isolated execution environments using integration rules. In one aspect, the forming of the secured integration is performed by: creating an integration of the identified isolated execution environments, and checking for presence of a data access transit in the created integration. In one aspect, when the data access transit is identified, the method further comprises applying restrictions based on identified options for the identified data access transit using integration rules.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Disclosed herein are systems and methods for detection of anomalies in a cyber-physical system (CPS) in real time. In one aspect, an exemplary method comprises: obtaining, in real time, a randomly distributed stream of observations of CPS parameters; converting an observation of a CPS parameter to a uniform temporal grid (UTG); when at least one criterion for unloading at least one UTG node of the converted observations is satisfied, unloading the UTG nodes corresponding to the satisfied criterion; for each unloaded UTG node, calculating a value of each output CPS parameter of a set of output CPS parameters; and detecting an anomaly in the CPS based on the values of the output CPS parameters.
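The abstract leaves the grid conversion and the anomaly criterion unspecified; the sketch below illustrates one simple possibility in Python: last-value-hold resampling of irregular observations onto a uniform temporal grid, followed by a fixed-band check. The step size, band limits and sample data are invented for the example.

```python
# Illustrative sketch: resampling irregular CPS observations onto a uniform
# temporal grid (UTG) and flagging values outside an expected band. The
# interpolation scheme, step and band are assumptions, not the patented criteria.
from bisect import bisect_right

def to_uniform_grid(samples, t0, t1, step):
    """samples: time-sorted list of (timestamp, value); last-value-hold onto the grid."""
    times = [t for t, _ in samples]
    grid, t = [], t0
    while t <= t1:
        i = bisect_right(times, t) - 1
        grid.append((t, samples[i][1] if i >= 0 else None))
        t += step
    return grid

def detect_anomalies(grid, low, high):
    return [(t, v) for t, v in grid if v is not None and not (low <= v <= high)]

if __name__ == "__main__":
    observations = [(0.0, 1.0), (0.7, 1.1), (2.4, 5.0), (3.1, 1.2)]  # irregular samples
    utg = to_uniform_grid(observations, t0=0.0, t1=3.0, step=1.0)
    print(detect_anomalies(utg, low=0.5, high=2.0))                  # [(3.0, 5.0)]
```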
Disclosed herein are systems and methods for providing a trained model to a computing device of a user. In one aspect, an exemplary method comprises, receiving, by a model transmitter, registration information from the computing device of the user comprising a trained model of the user's behavior, wherein the model is constructed using software provided by a service, storing, by the model transmitter, the received registration information in a database of behavior models, and during a repeat visit, by the user, to the service, updating the trained model of the user's behavior and transmitting the updated trained model to the service, wherein the updated trained model differs from a previously sent model of the user's behavior by no more than is allowed for unambiguous identification of the user on the service.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
G06F 16/9535 - Search customisation based on user profiles and personalisation
Disclosed herein are systems and methods for identifying information security threats. In one aspect, an exemplary method comprises: searching a machine-readable medium of a computer for data corresponding to at least one deleted file, when data corresponding to a deleted file is found, reading at least a portion of the data into RAM, analyzing the read data for information about information security threats, and when information about information security threats is detected, generating a notification. In another aspect, the method comprises: searching for data corresponding to at least one deleted file, when data corresponding to the deleted file is found, checking for a possibility of analyzing the data, when conditions of analysis are satisfied, reading at least a portion of the data into RAM, analyzing the read data for information about information security threats, and when information about information security threats is detected, generating a notification about the detected information security threat.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
12.
SYSTEM AND METHOD FOR CLASSIFYING INCOMING EVENTS BY USER'S MOBILE DEVICE BASED ON USER PREFERENCES
Disclosed herein are methods and systems for classifying incoming events by a user's mobile device based on user preferences. In one aspect, an exemplary method comprises: intercepting an incoming event received by a mobile device, analyzing content of the intercepted event to determine one or more attributes of the intercepted event, comparing the intercepted event to a plurality of previously collected and classified events, stored in an event repository, based on the one or more determined attributes to identify one or more similar events, determining a rating value of the one or more similar events based on a matrix of user preferences, wherein the rating value indicates a probability that the corresponding event belongs to a particular class of events, and classifying the intercepted event as undesirable on the mobile device if the rating value of the one or more similar events is less than a predetermined threshold value.
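To make the rating step concrete, the following Python sketch matches an intercepted event against stored events by two attributes and averages a user-preference score; the attributes, the neutral 0.5 default and the 0.3 threshold are illustrative assumptions rather than values from the patent.

```python
# Hypothetical sketch of the matching-and-rating flow described above.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # e.g. sender number or application package
    category: str    # e.g. "sms", "push", "call"

def similar_events(event: Event, repository: list) -> list:
    return [e for e in repository
            if e.source == event.source and e.category == event.category]

def rating(events: list, preferences: dict) -> float:
    """preferences maps (source, category) -> probability that the user wants such events."""
    if not events:
        return 0.5  # no history: neutral rating
    return sum(preferences.get((e.source, e.category), 0.5) for e in events) / len(events)

def classify(event: Event, repository: list, preferences: dict, threshold: float = 0.3) -> str:
    value = rating(similar_events(event, repository), preferences)
    return "undesirable" if value < threshold else "acceptable"

if __name__ == "__main__":
    repo = [Event("+100200", "sms"), Event("+100200", "sms")]
    prefs = {("+100200", "sms"): 0.1}        # the user consistently dismisses these
    print(classify(Event("+100200", "sms"), repo, prefs))   # undesirable
```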
Disclosed herein are systems and methods for filtering events for transmission to a remote device. In one aspect, an exemplary method comprises, collecting events and identifying, for each of the collected events, a type to which the event belongs from among a predetermined list of types of events, and determining, for each identified type of events, a selection coefficient that indicates a proportion of events of that type to be transmitted to a remote device, when a predetermined number of collected events is reached, combining the collected events into a sequence, and determining, for the sequence, a time interval for which a given number of events is collected, for each type of events, selecting events for transmission to the remote device based on the selection coefficient of the respective type of events, and transmitting the selected events to the remote device.
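A compressed sketch of the proportional-selection step is given below: once a batch has been collected, only a configured fraction of each event type is forwarded to the remote device. The grouping key, the coefficients and the batch contents are assumptions made for the example.

```python
# Illustrative only: coefficients and event structure are invented for the example.
import itertools
import random

def select_for_transmission(events, coefficients):
    """events: list of (event_type, payload); coefficients: type -> fraction to keep."""
    selected = []
    by_type = lambda e: e[0]
    for etype, group in itertools.groupby(sorted(events, key=by_type), key=by_type):
        group = list(group)
        keep = max(1, round(coefficients.get(etype, 1.0) * len(group)))
        selected.extend(random.sample(group, min(keep, len(group))))
    return selected

if __name__ == "__main__":
    batch = [("heartbeat", i) for i in range(100)] + [("error", i) for i in range(3)]
    out = select_for_transmission(batch, {"heartbeat": 0.05, "error": 1.0})
    print(len(out), "events selected for transmission")   # 5 heartbeats + 3 errors
```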
Disclosed herein are systems and methods for classifying calls on a remote device. In one aspect, an exemplary method comprises, collecting call data for each call, wherein each call is associated with a unique call identifier, extracting significant features from the collected call data, generating a call classification model based on the extracted significant features, wherein the call classification model comprises a set of rules based on which a predetermined call class is assigned to the call, extracting a text review from the collected call data, generating a generative review model based on the extracted text review, the generative review model used for correlating text reviews with a call class, and classifying the call for which the call data was collected based on the call classification model generated and the generative review model.
Disclosed herein are systems and methods for recognizing undesirable calls on a remote device. In one aspect, an exemplary method comprises, generating, for each call, a call identifier from a probabilistic hash received from a secure device, the probabilistic hash having been computed by the secure device based on a unique call identifier associated with call data collected for the call; analyzing the generated call identifiers to identify at least one of the generated call identifiers as a suspicious call identifier; requesting data from the secure device associated with the suspicious call identifier, where the requested data includes at least information about the call associated with the suspicious call identifier; and analyzing the data received in response to the request and recognizing the suspicious call identifier and the call associated with it as undesirable based on the analysis of the data received in response to the request.
A method for creating a heuristic rule to identify Business Email Compromise (BEC) attacks includes filtering text of received email messages, using a first classifier, to extract one or more terms indicative of a BEC attack from the text of the received email messages, wherein the first classifier includes a trained recurrent neural network that includes a language model, generating, using the first classifier, one or more n-grams based on the extracted terms, wherein each of the n-grams characterizes a particular extracted term, generating, using a second classifier, a vector representation of the extracted terms based on the generated n-grams, assigning a weight coefficient to each of the extracted terms, wherein a higher weight coefficient indicates higher relevancy of the corresponding extracted term to a BEC attack, and generating a heuristic rule associated with the BEC attack by combining the weight coefficients of a combination of the extracted terms.
Disclosed herein are systems and methods for detecting cyclic activity in an event stream. In one aspect, an exemplary method comprises, creating a buffer, determining a threshold for indicating a beginning of a cycle, processing each event by filling the buffer with the event, determining a number of unique events in the buffer, when the number reaches a predetermined size of the buffer, replacing one event with another by excluding the earliest event and including the new event, recalculating the number of unique events, comparing the recalculated number with a threshold for a maximum number of unique events for cycle detection, detecting a beginning of a cycle when the number of unique events is less than or equal to the maximum number of unique events for cycle detection, excluding further events from the event stream, and continuing to recalculate the number of unique events after each addition.
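The sliding-buffer logic lends itself to a short sketch: a cycle is suspected whenever a full buffer of the most recent events contains at most a given number of distinct values. The buffer size, threshold and sample stream below are illustrative, not values taken from the patent.

```python
# Sketch of the unique-count sliding buffer; parameters are illustrative.
from collections import Counter, deque

def detect_cycles(event_stream, buffer_size=8, max_unique=3):
    buf = deque(maxlen=buffer_size)
    counts = Counter()
    cycle_positions = []
    for i, ev in enumerate(event_stream):
        if len(buf) == buffer_size:          # the earliest event is about to be evicted
            oldest = buf[0]
            counts[oldest] -= 1
            if counts[oldest] == 0:
                del counts[oldest]
        buf.append(ev)                        # deque(maxlen=...) drops the oldest entry
        counts[ev] += 1
        if len(buf) == buffer_size and len(counts) <= max_unique:
            cycle_positions.append(i)         # beginning (or continuation) of a cycle
    return cycle_positions

if __name__ == "__main__":
    stream = ["login", "read", "write"] * 5 + ["e1", "e2", "e3", "e4", "e5", "e6", "e7", "e8"]
    print(detect_cycles(stream))              # positions inside the cyclic prefix
```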
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Software for computers, mobile phones and mobile computers (downloadable and recorded on magnetic and optical data media) in the field of computer security, in particular: operating systems; antivirus software; database management software for computer security. Design and software upgrades of computers, mobile phones and mobile computers, in particular: computer software design; computer software design for scanning and removing computer viruses and malicious software; computer software consultancy; rental of computer software; recovery of computer data.
19.
SYSTEM AND METHOD FOR PROVIDING SECURITY TO IOT DEVICES
Disclosed herein are systems and methods for providing security to an Internet of Things (IoT) device. An exemplary method comprises, obtaining, by an interceptor located on at least one gateway or the device, information about an interaction of the device with at least one of: other devices, service, and server; by an analysis tool located on the gateway: determining at least one category of the device and at least one category of a user of the device by interacting with a security service based on information received about the interaction of the device; receiving data from the security service, and identifying the security component to be installed on the device based on the data received from the security service, the category of the device and the category of a user of the device; and installing on the device, by the interceptor, the security component identified by the analysis tool.
Disclosed herein are systems and methods for detecting anomalies in a cyber-physical system. In one aspect, an exemplary method comprises, for a list of parameters of the CPS, collecting data containing values of the parameters of the CPS, generating at least two subsets of parameters of the CPS from the collected data, selecting at least two anomaly detectors from a list of anomaly detectors and selecting at least one corresponding subset of the parameters of the CPS for each selected anomaly detector, pre-processing each subset of the parameters of the CPS and transmitting an output of the pre-processing to the corresponding anomaly detector, for each pre-processed subset, detecting anomalies in the data using the corresponding respective anomaly detector, and detecting a combined anomaly in the CPS by combining and processing results obtained from the selected at least two anomaly detectors.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Software for computers, mobile phones and mobile computers (downloadable and recorded on magnetic and optical data media) in the field of computer security, in particular: operating systems; antivirus software; database management software for computer security. Design and software upgrades of computers, mobile phones and mobile computers, in particular: computer software design; computer software design for scanning and removing computer viruses and malicious software; computer software consultancy; rental of computer software; recovery of computer data.
22.
METHOD FOR IDENTIFYING PATTERNS AND ANOMALIES IN THE FLOW OF EVENTS FROM A CYBER-PHYSICAL SYSTEM
Disclosed herein are methods for identifying the structure of patterns and anomalies in a flow of events from a cyber-physical system or information system. In one aspect, an exemplary method comprises, using at least one connector, getting event data, generating at least one episode consisting of a sequence of events, and transferring the generated episodes to an event processor; and using the event processor, processing the episodes using a neurosemantic network, wherein the processing includes recognizing events and patterns previously learned by the neurosemantic network, training the neurosemantic network, identifying a structure of patterns by mapping the patterns to neurons on a hierarchy of layers of the neurosemantic network, attributing events and patterns corresponding to neurons of the neurosemantic network to an anomaly depending on a number of activations of the corresponding neuron, and storing the state of the neurosemantic network.
Disclosed herein are systems for identifying the structure of patterns and anomalies in a flow of events from a cyber-physical system or information system. In one aspect, an exemplary method comprises, using at least one connector, getting event data, generating at least one episode consisting of a sequence of events, and transferring the generated episodes to an event processor; and using the event processor, processing the episodes using a neurosemantic network, wherein the processing includes recognizing events and patterns previously learned by the neurosemantic network, training the neurosemantic network, identifying a structure of patterns by mapping the patterns to neurons on a hierarchy of layers of the neurosemantic network, attributing events and patterns corresponding to neurons of the neurosemantic network to an anomaly depending on a number of activations of the corresponding neuron, and storing the state of the neurosemantic network.
A method for detecting a vulnerability in an operating system based on process and thread data, includes the steps of: detecting one or more launches of one or more threads associated with one or more processes in an operating system (OS); generating a set of privileges based on the detected one or more launches; analyzing the generated set of privileges to identify illegitimate changes in privileges; detecting a vulnerability in the OS using one or more rules for detecting a vulnerability based on the analyzed set of privileges; and isolating a file that exploited the detected vulnerability, in response to detecting the vulnerability.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
25.
System and method of a cloud server for providing content to a user
Disclosed herein are systems and methods of a cloud server for providing content to a user. In one aspect, an exemplary method comprises receiving data, from a user device, the data comprising at least one of: a hash and a type of intercepted search requests and site names, incrementing a value of a popularity counter of the received data, when the value of the popularity counter of the received data exceeds a predetermined threshold, sending an inquiry for the intercepted search requests and site names in plain form, and when the intercepted search requests and site names are received in plain form, performing categorization of the intercepted search requests and site names, and transmitting, to the user device, content associated with the intercepted search requests and rules for establishing a category of the content.
Disclosed are a system and method for detecting anomalies in the behavior of a trusted process. An example method includes detecting a launch of a trusted process in a computer system; selecting a basic behavior model corresponding to the trusted process and a machine learning model corresponding to the trusted process from a data store; monitoring execution of the trusted process using the basic behavior model and identifying a plurality of events; comparing a total probability of occurrence of all of the plurality of identified events with a predefined threshold; extracting data corresponding to the identified events from a Markov chain, in response to determining that the probability of occurrence of all of the plurality of identified events is below the predefined threshold; analyzing the extracted data using the machine learning model; and generating a decision with respect to presence of anomalous behavior in the trusted process based on the analysis performed by the machine learning model.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
Disclosed herein are systems and methods for spam identification. A spam filter module may receive an email at a client device and may determine a signature of the email. The spam filter module may compare the determined signature with a plurality of spam signatures stored in a database. In response to determining that no match exists between the determined signature and the plurality of spam signatures, the spam filter module may place the email in quarantine. A spam classifier module may extract header information of the email and determine a degree of similarity between known spam emails and the email. In response to determining that the degree of similarity exceeds a threshold, the spam filter module may transfer the email from the quarantine to a spam repository.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
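A compressed Python sketch of the quarantine flow described in this entry's abstract may clarify the sequence: signature lookup first, then a header-similarity check for mail that had no signature match. The SHA-256 signature, the Jaccard measure over header tokens and the 0.5 threshold are stand-ins; the patent does not disclose the actual similarity computation.

```python
# Hypothetical sketch; the signature and similarity functions are illustrative.
import hashlib

def signature(email_body: str) -> str:
    return hashlib.sha256(email_body.strip().lower().encode()).hexdigest()

def header_similarity(headers: dict, known_spam_headers: dict) -> float:
    a = {f"{k}:{v}" for k, v in headers.items()}
    b = {f"{k}:{v}" for k, v in known_spam_headers.items()}
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_email(email: dict, spam_signatures: set, known_spam_headers: dict,
                 threshold: float = 0.5) -> str:
    if signature(email["body"]) in spam_signatures:
        return "spam"                     # direct signature match
    # no match: the message is quarantined and the classifier inspects the headers
    if header_similarity(email["headers"], known_spam_headers) > threshold:
        return "spam"                     # moved from quarantine to the spam repository
    return "quarantined"

if __name__ == "__main__":
    mail = {"headers": {"From": "promo@example.test", "X-Mailer": "bulk"},
            "body": "Limited offer!!!"}
    print(filter_email(mail, spam_signatures=set(),
                       known_spam_headers={"From": "promo@example.test", "X-Mailer": "bulk"}))
```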
A method for securing a plurality of IoT devices using a gateway includes intercepting, by a gateway, information about interactions between a first IoT device and at least one of: a second IoT device, a computer server, and a computer service. One or more cyber security threats are detected by the gateway based on the intercepted information and based on information stored in at least one of a first database and a second database. The first database is configured to store information about IoT devices and the second database is configured to store information about cyber security threats. One or more cyber security threat mitigation actions are identified by the gateway to address the detected one or more cyber security threats. The identified one or more cyber security threat mitigation actions are performed by the gateway.
Disclosed are system and method for detecting small-sized objects based on image analysis using an unmanned aerial vehicle (UAV). The method includes obtaining object search parameters, wherein the search parameters include at least one characteristic of an object of interest; generating, during a flight of the UAV, at least one image containing a high-resolution image; analyzing the generated image using a machine learning algorithm based on the obtained search parameters; recognizing the object of interest using a machine learning algorithm if at least one object fulfilling the search parameters is detected in the image during the analysis; and determining the location of the detected object, in response to recognizing the object as the object of interest.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
30.
System and method for detecting a harmful script based on a set of hash codes
Disclosed herein are systems and methods for detecting harmful scripts. In one aspect, an exemplary method comprises, identifying a file containing a script, wherein the identification of the file is performed by analyzing each file of a plurality of files for a presence of a harmful script, generating a summary of the script based on the identified file, calculating static and dynamic parameters of the generated summary of the script, recognizing a script programming language based on the calculated static parameters and dynamic parameters of the generated summary of the script using at least one language recognition rule, processing the identified file based on the data about the recognized script programming language, generating a set of hash codes based on a processed file using rules for generating hash codes, and detecting the harmful script when the generated set of hash codes is similar to known harmful sets of hash codes.
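To illustrate comparing "sets of hash codes", the sketch below tokenizes a script, hashes token 3-grams and compares the resulting set against known-harmful sets with a Jaccard similarity. The tokenizer, n-gram length, hash truncation and 0.7 threshold are assumptions; the language recognition and summary-generation rules from the abstract are not modelled.

```python
# Illustrative sketch; chunking, hashing and the threshold are invented choices.
import hashlib
import re

def script_summary(script_text: str) -> list:
    """Very rough 'summary': lower-cased identifiers and keywords, in order."""
    return re.findall(r"[a-zA-Z_]\w+", script_text.lower())

def hash_set(tokens: list, ngram: int = 3) -> set:
    grams = [" ".join(tokens[i:i + ngram]) for i in range(max(len(tokens) - ngram + 1, 1))]
    return {hashlib.sha1(g.encode()).hexdigest()[:12] for g in grams}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def is_harmful(script_text: str, known_harmful_sets: list, threshold: float = 0.7) -> bool:
    candidate = hash_set(script_summary(script_text))
    return any(jaccard(candidate, known) >= threshold for known in known_harmful_sets)

if __name__ == "__main__":
    known = [hash_set(script_summary("var sh = new ActiveXObject('WScript.Shell'); sh.Run(cmd);"))]
    sample = "var  sh = new ActiveXObject( 'WScript.Shell' );  sh.Run( cmd );"
    print(is_harmful(sample, known))   # True: the token n-grams are effectively identical
```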
A method for determination of anomalies in a cyber-physical system (CPS) includes generating one or more diagnostic rules configured to calculate at least one auxiliary CPS variable. One or more values of the at least one auxiliary CPS variable are calculated for a predefined output interval of time based on collected values of a group of primary CPS variables for a predefined input interval of time based on the generated diagnostic rule. An anomaly is determined based on the collected values of the group of primary CPS variables and the one or more calculated values of the at least one auxiliary CPS variable.
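As a toy example of a diagnostic rule, the sketch below derives an auxiliary variable (a rolling mean of a pressure/flow ratio) from two primary variables over an input window and flags an anomaly when it leaves an expected band. The variable names, window and band are purely illustrative.

```python
# Hypothetical diagnostic rule; the variables, window and band are invented.
def auxiliary_variable(pressure: list, flow: list, window: int = 5):
    """Rolling mean of the pressure/flow ratio over the last `window` samples."""
    ratios = [p / f for p, f in zip(pressure, flow) if f != 0]
    tail = ratios[-window:]
    return sum(tail) / len(tail) if tail else None

def anomaly(pressure: list, flow: list, low: float = 0.8, high: float = 1.2) -> bool:
    aux = auxiliary_variable(pressure, flow)
    return aux is not None and not (low <= aux <= high)

if __name__ == "__main__":
    pressure = [10.0, 10.1, 10.2, 16.0, 18.0]   # primary CPS variable 1
    flow     = [10.0, 10.0, 10.1, 10.0, 10.0]   # primary CPS variable 2
    print(anomaly(pressure, flow))              # True: the ratio drifts above the band
```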
A method for detecting a harmful file includes detecting activity of a driver in an operating system by intercepting an Application Programming Interface (API) request from the driver to an application. The detected activity of the driver is analyzed to determine if the driver is dangerous. A search for a file that is linked to the application and that uses the driver is performed, in response to determining that the driver is dangerous. The file found by the search is declared to be harmful.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
33.
SYSTEM AND METHOD FOR INTERRUPTING AN INCOMING UNWANTED CALL ON A MOBILE DEVICE
A method of interrupting an incoming call on a mobile device includes: intercepting an incoming telephone call received by a mobile device; determining one or more parameters of the intercepted telephone call; determining if the intercepted telephone call matches one or more telephone calls associated with a list of prohibited phone numbers by comparing the determined parameters of the intercepted call with parameters of the one or more telephone calls associated with the list of prohibited phone numbers; and in response to determining a match between the intercepted telephone call and the one or more telephone calls associated with the list of prohibited phone numbers: blocking reception of the intercepted telephone call; identifying a calling party associated with the intercepted telephone call; sending an authentication request to the identified calling party; and interrupting the intercepted telephone call in response to unsuccessful authentication.
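The matching-and-blocking sequence can be sketched as follows; the prohibited-list format and the stubbed authentication step are assumptions, since the abstract does not specify how the calling party is challenged.

```python
# Illustrative sketch; the authentication step is a stub.
from dataclasses import dataclass

@dataclass
class Call:
    number: str

def matches_prohibited(call: Call, prohibited: list) -> bool:
    return any(call.number == entry.get("number") or
               ("prefix" in entry and call.number.startswith(entry["prefix"]))
               for entry in prohibited)

def authenticate_caller(call: Call) -> bool:
    """Stub: a real system would challenge the calling party (e.g. via a callback)."""
    return False

def handle_incoming(call: Call, prohibited: list) -> str:
    if matches_prohibited(call, prohibited):
        # reception is blocked while the calling party is asked to authenticate
        if not authenticate_caller(call):
            return "interrupted"
        return "allowed (authenticated)"
    return "allowed"

if __name__ == "__main__":
    prohibited_numbers = [{"number": "+1900555000"}, {"prefix": "+1900"}]
    print(handle_incoming(Call("+1900123456"), prohibited_numbers))   # interrupted
```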
Disclosed herein are systems and methods for training a model to identify a user to a predetermined degree of reliability. In one aspect, an exemplary method comprises, parameterizing gathered data on behavior of a user in a form of a first vector, deriving a second vector from the first vector by removing noise and low-priority information from the first vector, providing the second vector to a training algorithm, and generating a trained model for the user, the generated trained model being different for each user such that only the trained model generated for the user satisfies the predetermined degree of reliability.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
G06F 16/9535 - Search customisation based on user profiles and personalisation
A method for diagnostics and monitoring of anomalies in a cyber-physical system (CPS) includes obtaining information related to anomalies identified in the CPS. The obtained information includes at least one value of one or more CPS variables. One or more classifying features of the identified anomalies in the CPS are generated based on the obtained information. Classification of the identified anomalies in the CPS into two or more anomaly classes is performed based on the generated classifying features. Each of the two or more anomaly classes is associated with one or more anomaly characteristics. Diagnostics of anomalies are performed in each of the two or more anomaly classes by calculating values of the anomaly characteristics associated with each of the two or more anomaly classes. Anomalies of each of the two or more anomaly classes are monitored based on the calculated values of the anomaly characteristics associated with each of the two or more anomaly classes.
A method for restricting reception of e-mail messages from a sender of bulk spam mail includes identifying an unknown sender of received e-mail messages. A set of e-mail messages received from the identified sender is selected. A type of bulk spam mailing is determined based on the selected set of e-mail messages using one or more spam identification signatures. Restrictions on reception of e-mail messages from a sender distributing bulk spam of the determined type are generated.
Disclosed herein are systems and methods for providing content to a user. In one aspect, an exemplary method comprises intercepting a search request and a site name in a browser, and sending the intercepted search request and site name to a content-provision tool, computing a hash of the intercepted search request and site name, determining a type of the intercepted search request and site name, and transmitting the computed hash and the type of the intercepted search request and site name to a cloud server, transmitting the intercepted request and site name to the cloud server in plain form, receiving, from the cloud server, content based on a categorization of the intercepted request and site name and rules for establishing a category of the content, and when the rules are executed, displaying, to the user, the content on the computing device of the user in accordance with a category established based on the rules.
Disclosed herein are systems and methods for identifying a phishing email message. In one aspect, an exemplary method comprises, identifying an email message as a suspicious email message by applying a first machine learning model, identifying the suspicious email message as a phishing message by applying a second machine learning model, and taking an action to provide information security against the identified phishing message. In one aspect, the first machine learning model is pre-trained on first attributes comprising values of Message_ID header, X-mail headers, or sequences of values of headers. In one aspect, the second machine learning model is pre-trained on second attributes comprising attributes related to at least one of: reputation of links, categories of email messages, flag indicating domains of blocked or known senders, a degree of similarity of the domain with those of known senders, flags indicating HTML code or script in the body of the email.
Disclosed herein are systems and methods for installing a personalized application on a mobile device. In one aspect, an exemplary method comprises, identifying an application distribution source by analyzing settings of an operating system of the mobile device that were changed as a result of obtaining an application from the application distribution source, selecting resources for the application that correspond to the identified application distribution source when a resource database from which the selection is being performed contains at least one resource corresponding to the identified application distribution source, creating the personalized application by reconfiguring the application obtained from the application distribution source based on the selected resources, and installing, on the mobile device, the created personalized application.
A method for transferring data from a first network to a second network using a gateway includes setting, by a security monitor, a state of the gateway to a first state indicating to a destination agent that access is granted to trusted memory and denied to the second network and untrusted memory. The destination agent is configured, while the gateway is in the first state, based on parameters stored in the trusted memory, to transfer data received from a source agent to the second network. The state of the gateway is changed to a second state indicating to the destination agent that access is denied to the trusted memory and granted to the second network and the untrusted memory. Transfer of the data from the source agent of the first network to the destination agent of the second network is controlled, while the gateway is in the second state.
Disclosed herein are systems and methods for modifying execution environments of applications. In one aspect, an exemplary method comprises, identifying an application that requires an isolated execution environment in order to be analyzed, generating an isolated execution environment to launch the identified application using constraint generating rules from a rules database, launching the application in the isolated execution environment that was generated, when an incorrect execution of the application is detected after the application is launched in the isolated execution environment, stopping the execution of the application and modifying the isolated execution environment using the constraint generating rules from the rule database, and when an incorrect execution of the application is not detected after the application is launched in the isolated execution environment, checking for a presence of a malicious code in the application running in the modified isolated execution environment.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
42.
System and method for monitoring delivery of messages passed between processes from different operating systems
Disclosed herein are systems and methods for monitoring delivery of messages passed between processes from different operating systems. In one aspect, an exemplary method comprises, creating a proxy process in a first Operating System (OS) for a second process, wherein the second process is from a second OS, the first and second OS being installed in respective computing environments, assigning at least one security policy to the created proxy process for monitoring delivery of messages associated with the created proxy process, where the messages are transmitted through a programming interface of the created proxy process corresponding to a programming interface of the second process, generating a security monitor for the first OS based on the created proxy process and security policies of the first OS, and monitoring the delivery of messages between at least a first process in the first OS and the second process based on the security policies.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
43.
System and method for protecting subscriber data in the event of unwanted calls
A method for protecting subscriber data includes intercepting network traffic associated with a call. The network traffic includes call parameters and call stream data. A first set of the call parameters is analyzed. A first probability value of the call being declared as unwanted is determined. The call stream data is analyzed to define a second set of call parameters. The first set of call parameters is reanalyzed based on the second set. A second probability value of the call being declared as unwanted is determined. A determination is made if the second probability value exceeds a second threshold value. The call is declared as unwanted, in response to determining that the second probability value exceeds the second threshold. The first and second sets of call parameters are transmitted to an application configured to protect data of a protected subscriber.
A method for building a security monitor includes identifying one or more objects of a microkernel Operating System (OS) participating in transmission of an Inter Process Communication (IPC) message. The one or more OS objects include one or more processes and/or one or more applications executed by the microkernel OS. One or more security policies associated with the identified microkernel OS objects are selected from a security policy database. A policy verification module is configured based on the selected security policies to generate a decision related to controlling the transmission of the IPC message. A security monitor is generated using the configured policy verification module to control the transmission of the message based on the decision generated by the policy verification module.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
H04L 29/06 - Communication control; Communication processing characterised by a protocol
45.
System and method for detecting potentially malicious changes in applications
Disclosed herein are systems and methods for detecting potentially malicious changes in an application. In one aspect, an exemplary method comprises, selecting a first file to be analyzed and at least one second file similar to the first file, for each of the at least one second file, calculating at least one set of features, identifying a set of distinguishing features of the first file by finding, for each of the at least one second file, a difference between a set of features of the first file and the calculated at least one set of features of the second file, and detecting a presence of potentially malicious changes in the identified set of distinguishing features of the first file.
Systems and methods for verifying the integrity of a software installation image before installing the software. Security of the software installation process is ensured by providing access to the software image from a security monitor using security policies. An installation system for protecting the installation of a software image includes instructions that, when executing on computing hardware, cause the computing hardware to implement: a verifier engine to verify the integrity of the software image, a security monitor engine to set an initial access state for the software image granting access to the verifier engine and to update the access state for the software image in accordance with at least one security policy, and an installer engine to install software contained in the software image according to the access state.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
47.
System and method for assessing an impact of malicious software causing a denial of service of components of industrial automation and control systems
Disclosed herein are systems and methods for assessing an impact of malicious software causing a denial of service of components of industrial automation and control systems (IACS). In one aspect, an exemplary method comprises, generating a configuration of the IACS on a testing device based on specifications, obtaining a set of investigated software, where the set includes at least one sample of one malicious software, testing the generated configuration using the received set of investigated software, identifying occurrences of denials of service of the components of the testing device which are used to simulate the generated configuration, determining an impact of the malicious software on the generated configuration, and a degree of degradation of a performance of the generated configuration of IACS, and pronouncing a verdict as to a danger of the malicious software for the generated configuration of IACS based on the determined impact of the malicious software.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G05B 19/406 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by monitoring or safety
48.
System and method for handling unwanted telephone calls through a branching node
Disclosed herein are systems and methods for handling unwanted telephone calls through a branching node. In one aspect, an exemplary method comprises, intercepting a call request from a terminal device of a calling party to a terminal device of a called party, establishing a connection through the branching node via two different communication channels, a first communication channel being with the terminal device of the called party and a second communication channel being with a call recorder; duplicating media data between the terminal devices such that one data stream is directed towards a receiving device of the media data and a second data stream is directed towards the call recorder; recording the call and sending the recording to an automatic speech recognizer for converting the media file to digital information suitable for analysis; and when the call is unwanted, handling the call based on a classification of the call.
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
H04M 3/436 - Arrangements for screening incoming calls
G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
G10L 15/14 - Speech classification or search using statistical models, e.g. Hidden Markov Models [HMM]
H04M 3/22 - Arrangements for supervision, monitoring or testing
H04M 3/53 - Centralised arrangements for recording incoming messages
49.
System and method for processing personal data by application of policies
Disclosed herein are systems and methods for processing personal data by application of policies. In one aspect, an exemplary method comprises, by the network infrastructure component, analyzing communication protocols between an IoT device and the network infrastructure component, identifying at least one field that contains personal data, for each identified field, analyzing the identified field using personal data processing policies uploaded to the network infrastructure component, and applying the personal data policies for enforcement.
Disclosed herein are systems and methods for controlling an IoT device from a node (hub) in a network infrastructure. In one aspect, an exemplary method comprises, analyzing the IoT device based on at least one of: characteristics of functionalities of the IoT device, characteristics of information security of the IoT device, and characteristics of an impact on human life by the IoT device and/or by the security of the IoT device, adjusting the IoT device based on results of the analysis, determining whether the characteristics for which the analysis was performed changed during an operation of the device, and when the characteristics for which the analysis was performed changed, changing one or more settings associated with the IoT device based on the changes determined during the operation of the device.
Disclosed herein are systems and methods for clustering email messages identified as spam using a trained classifier. In one aspect, an exemplary method comprises, selecting at least two characteristics from each received email message, for each received email message, using a classifier containing a neural network, determining whether or not the email message is spam based on the at least two characteristics of the email message, for each email message determined to be spam, calculating a feature vector, the feature vector being calculated at a final hidden layer of the neural network, and generating one or more clusters of the email messages identified as spam based on similarities of the feature vectors calculated at the final hidden layer of the neural network.
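The clustering step can be illustrated with a tiny NumPy example (assuming NumPy is available): the last hidden layer of a random, untrained two-layer network supplies the feature vector for each message, and messages whose vectors are close in cosine similarity are grouped greedily. The network, the 0.95 threshold and the random inputs are placeholders for the trained classifier of the abstract.

```python
# Illustrative sketch; the random network stands in for a trained spam classifier.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # input -> final hidden layer
W2 = rng.normal(size=(8, 2))    # hidden -> spam/ham output (unused for clustering)

def hidden_features(x: np.ndarray) -> np.ndarray:
    """Final-hidden-layer activations used as the clustering feature vector."""
    return np.maximum(x @ W1, 0.0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def greedy_clusters(vectors, threshold=0.95):
    clusters = []
    for v in vectors:
        for cluster in clusters:
            if cosine(v, cluster[0]) >= threshold:
                cluster.append(v)
                break
        else:
            clusters.append([v])
    return clusters

if __name__ == "__main__":
    emails = [rng.normal(size=16) for _ in range(3)]
    emails.append(emails[0] + 0.01 * rng.normal(size=16))   # near-duplicate spam message
    print(len(greedy_clusters([hidden_features(e) for e in emails])))   # likely 3
```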
A method for generating a signature of a spam message includes determining one or more classification attributes and one or more clustering attributes contained in successively intercepted first and second electronic messages. The first electronic message is classified using a trained classification model for classifying electronic messages based on the one or more classification attributes. The first electronic message is classified as spam if a degree of similarity of the first electronic message to one or more spam messages is greater than a predetermined value. A determination is made whether the first electronic message and the second electronic message belong to a single cluster based on the determined one or more clustering attributes. A signature of a spam message is generated based on the identified single cluster of electronic messages.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Systems and methods for building systems of honeypot resources for the detection of malicious objects in network traffic. A system includes at least two gathering tools, each configured for gathering data about the computer system on which it is installed, a building tool configured for building at least two virtual environments, each including an emulation tool configured for emulating the operation of the computer system in the virtual environment, and a distribution tool configured for selecting at least one virtual environment for each computer system and for establishing a connection between the computer system and the virtual environment.
Disclosed herein are systems and methods for configuring IoT devices from the network infrastructure component based on a type of network, wherein the network contains at least one IoT device. In one aspect, an exemplary method comprises, by the network infrastructure component, collecting data on one or more IoT devices, wherein each of the one or more IoT devices is connected to the network infrastructure component; for each IoT device, identifying a type of network; defining policies for configuring each of the one or more IoT devices based on the identified type of network; and for each of the one or more IoT devices, applying policies for monitoring and configuring the IoT device.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Systems and methods for managing malicious code detection rules. Systems and methods ensure information security by maintaining malicious code detection rules including through detection of one or more errors and modification of the malicious code detection rule. An anti-virus tool is configured to detect malicious code for an object under analysis based on a malicious code detection rule, a gathering tool is configured to gather use data about the malicious code detection rule, a detection tool is configured to determine whether an error is present based on an error detection rule, and a modification tool is configured to change the malicious code detection rule.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
56.
System and method for using weighting factor values of inventory rules to efficiently identify devices of a computer network
A method for using inventory rules to identify devices of a computer network includes intercepting data traffic across one or more communication links of the computer network. The intercepted data traffic is analyzed to determine whether one or more of a plurality of inventory rules is satisfied by the intercepted data traffic. Each of the plurality of inventory rules comprises one or more conditions indicating the presence of a particular computer network device having a set of parameters. Each one of the plurality of inventory rules has a weighting factor value indicative of a priority of the application of a corresponding rule. The weighting factor value depends on previously identified devices. One or more devices of the computer network are identified using the weighting factor value of the one or more satisfied inventory rules.
H04L 41/0853 - Retrieval of network configuration; tracking network configuration history by actively collecting configuration information or by backing up configuration information
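As a rough illustration of entry 56, the sketch below applies inventory rules in descending order of a weighting factor and nudges a rule's weight after each successful identification, so later matching depends on previously identified devices. The rule fields, traffic records and weight update are hypothetical.

from dataclasses import dataclass

@dataclass
class InventoryRule:
    name: str          # device type the rule identifies
    conditions: dict   # traffic field -> required value
    weight: float      # weighting factor: priority of applying the rule

    def matches(self, packet: dict) -> bool:
        return all(packet.get(k) == v for k, v in self.conditions.items())

def identify_devices(packets, rules):
    identified = []
    for packet in packets:
        # Rules with a higher weighting factor are tried first.
        for rule in sorted(rules, key=lambda r: r.weight, reverse=True):
            if rule.matches(packet):
                identified.append((packet["src"], rule.name))
                # Raise the weight once the rule has fired, so the ordering
                # depends on previously identified devices.
                rule.weight += 0.1
                break
    return identified

rules = [
    InventoryRule("ip-camera", {"port": 554, "proto": "rtsp"}, weight=1.0),
    InventoryRule("plc",       {"port": 502, "proto": "modbus"}, weight=2.0),
]
traffic = [
    {"src": "10.0.0.5", "port": 502, "proto": "modbus"},
    {"src": "10.0.0.7", "port": 554, "proto": "rtsp"},
]
print(identify_devices(traffic, rules))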
A method for creating a heuristic rule to identify Business Email Compromise (BEC) attacks includes filtering text of received email messages, using a first classifier, to extract one or more terms indicative of a BEC attack from the text of the received email messages. One or more n-grams are generated, using the first classifier, based on the extracted terms. A vector representation of the extracted terms is generated, using a second classifier, based on the generated one or more n-grams. The second classifier includes a logit model. A weight coefficient is assigned to each of the one or more extracted terms based on an output of the trained logit model. A higher weight coefficient indicates a higher relevancy of the corresponding term to a BEC attack. A heuristic rule associated with the BEC attack is generated by combining the weight coefficients of a combination of the one or more extracted terms.
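A hedged sketch of one way to realize the weighting step described above, assuming scikit-learn: n-grams are extracted from message text, a logit model is fitted, and its coefficients serve as term weights that a heuristic rule sums against a threshold. The training messages, labels and threshold are invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "urgent wire transfer needed today, keep this confidential",
    "please update the invoice bank details before payment",
    "lunch menu for the team offsite next week",
    "minutes from yesterday's project meeting attached",
]
labels = [1, 1, 0, 0]  # 1 = known BEC sample, 0 = benign

vectorizer = CountVectorizer(ngram_range=(1, 2))   # unigrams and bigrams
X = vectorizer.fit_transform(texts)
logit = LogisticRegression().fit(X, labels)

# Weight coefficient per extracted term; larger -> more indicative of a BEC attack.
weights = dict(zip(vectorizer.get_feature_names_out(), logit.coef_[0]))

def heuristic_rule(message, threshold=1.0):
    """Fire when the combined weights of terms present in the message exceed a threshold."""
    present = vectorizer.transform([message]).nonzero()[1]
    terms = vectorizer.get_feature_names_out()[present]
    return sum(weights[t] for t in terms) > threshold

print(heuristic_rule("urgent and confidential wire transfer request"))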
Disclosed herein are systems and methods for providing a security policy for an electronic control unit (ECU) implementing an Autosar Adaptive Platform (AAP) standard. In one aspect, an exemplary method comprises maintaining a list of allowed interactions, the allowed interactions being between control applications and a basic component, the basic component including at least a program element defined by the AAP standard. In one aspect, when a request for a verdict as to whether or not access for an interaction of a first control application with the basic component is received from an operating system (OS) kernel, the method comprises performing a search in the list of allowed interactions, and when the interaction for which the request is received is found in the list, the method comprises providing a verdict to the OS kernel allowing the interaction.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
B60R 16/023 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for transmission of signals between vehicle parts or subsystems
59.
System and method of blocking advertising on computing devices based on estimated probability
Disclosed herein are systems and methods for blocking information from being received on a computing device. In one aspect, an exemplary method comprises, by a hardware processor, intercepting a Domain Name System (DNS) request, the intercepted DNS request being initiated by an advertising module of the computing device; obtaining a set of rules for a transmission of the intercepted DNS request; estimating a probability of the intercepted DNS request being a DNS request that was initiated by one or more actions of a user based on the obtained set of rules; and blocking advertisement information from being displayed on the computing device based on the estimated probability, wherein the blocking comprises blocking the advertisement information from being received on the computing device.
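The following is a minimal sketch, with invented rules and thresholds, of the estimation step above: each rule adjusts the probability that the intercepted DNS request was initiated by the user, and the request is blocked when that probability falls below a cut-off.

def estimate_user_probability(request, rules):
    """Combine rule scores into a rough probability that the user initiated the request."""
    p = 0.5
    for rule in rules:
        if rule["condition"](request):
            p = min(1.0, max(0.0, p + rule["delta"]))
    return p

rules = [
    # A request fired long after the last user input looks machine-initiated.
    {"condition": lambda r: r["ms_since_user_input"] > 5000, "delta": -0.3},
    # Domains already seen serving ads lower the probability further.
    {"condition": lambda r: r["domain"] in {"ads.example.net"}, "delta": -0.4},
    # A request right after a tap or click looks user-initiated.
    {"condition": lambda r: r["ms_since_user_input"] < 500, "delta": +0.3},
]

request = {"domain": "ads.example.net", "ms_since_user_input": 9000}
p_user = estimate_user_probability(request, rules)
if p_user < 0.3:
    print("blocking", request["domain"], "(p_user =", round(p_user, 2), ")")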
Disclosed herein are systems and methods for categorizing an application on a computing device including gathering a set of attributes of an application. The set of attributes of the application includes at least one of: a number of files in an application package of the application; a number of executable files in the application package; numbers and types of permissions being requested; a number of classes in the executable files in the application package; and a number of methods in the executable files in the application package. The gathered set of attributes is sent to a trained classification model. The application is classified, using the classification model, based on the gathered set of attributes by generating one or more probabilities of the application belonging to respective one or more categories of applications. A category of the application is determined based on the generated one or more probabilities.
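A toy sketch of the classification step above, assuming scikit-learn and a made-up attribute vector (file counts, permissions, class and method counts): the model returns per-category probabilities from which the category is chosen.

from sklearn.ensemble import RandomForestClassifier

# Attribute vector: [files, executable files, permissions, classes, methods]
training_attributes = [
    [120, 2,  3, 400, 3000],   # game
    [300, 4,  5, 900, 8000],   # game
    [ 40, 1, 12, 150, 1200],   # messenger
    [ 55, 1, 14, 200, 1500],   # messenger
]
training_categories = ["game", "game", "messenger", "messenger"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(training_attributes, training_categories)

unknown_app = [[48, 1, 13, 180, 1400]]
probabilities = dict(zip(model.classes_, model.predict_proba(unknown_app)[0]))
category = max(probabilities, key=probabilities.get)
print(probabilities, "->", category)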
A method for detecting a false positive outcome in classification of files includes analyzing a file to determine whether or not the file is to be recognized as being malicious; in response to recognizing the file as being malicious, analyzing the file to determine whether a digital signature certificate is present for the file; in response to determining that the digital signature certificate is present for the file, comparing the digital certificate of the file with one or more digital certificates stored in a database of trusted files; and detecting a false positive outcome if the digital certificate of the file is found in the database of trusted files. When the false positive outcome is detected, the file is excluded from further determination of whether the file is malicious and a flexible hash value of the file is calculated.
Disclosed herein are systems and methods for handling unwanted telephone calls. In one aspect, an exemplary method comprises, intercepting a call request for a call from a terminal device of a calling party to a terminal device of a called party, generating a call recording containing media data transmitted within a connection established by the intercepted call request, determining attributes of the generated call recording, classifying the call as an unwanted call based on the determined attributes, wherein the classification is performed by a classifier trained on previously collected unwanted calls, and wherein the call is classified as unwanted when the attributes belong to a known class of unwanted calls, and handling the call in accordance with the classification of the call, the handling including at least securing information of the call.
Disclosed herein are systems and methods for granting a user data processor access to a cryptocontainer of user data. In one aspect, an exemplary method comprises, creating a cryptocontainer for user's data, wherein the cryptocontainer receives at least one element of the user's data and encrypts the element; for the user data processor, establishing rights for accessing the element using a first key, and forming at least one access structure, the forming including, placing the first key in the access structure based on the established rights, receiving, from the user data processor, a second key linked to the user data processor which is to be used for accessing the first key, and encrypting the first key with the second key; and when a request for access to the cryptocontainer is received, granting, to the user data processor, access to the cryptocontainer based on the formed at least one access structure.
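A minimal sketch of the key layout described above, assuming the Python `cryptography` package: a data element is encrypted with a first (container) key, and the first key is wrapped with the processor-supplied second key inside an access structure. Key names and the single-element container are illustrative.

from cryptography.fernet import Fernet

# Container side: encrypt one element of user data with the first key.
first_key = Fernet.generate_key()
element = Fernet(first_key).encrypt(b"user@example.com")

# Access structure for one data processor: the first key wrapped with the
# processor-supplied second key, according to the established access rights.
second_key = Fernet.generate_key()           # provided by the data processor
wrapped_first_key = Fernet(second_key).encrypt(first_key)
access_structure = {"processor-1": wrapped_first_key}

# Processor side: unwrap the first key with its own key, then read the element.
recovered_key = Fernet(second_key).decrypt(access_structure["processor-1"])
print(Fernet(recovered_key).decrypt(element))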
Disclosed herein are systems and methods for granting access to data of a user. In one aspect, an exemplary method comprises, blocking the processing of data of a user, transferring the data of the user to a storage device, receiving a request for data processing from a collected data processor of a device, redirecting the received request to the storage device, determining, by the storage device, data access rights for the collected data processor of the device from which the request for data processing is received in accordance with data access rights established by a data access rights manager, and providing access to the data in accordance with the determined data access rights.
The present disclosure provides systems and methods for increasing the cybersecurity of a control subject of an industrial technological system. In an exemplary aspect, the method comprises installing a protected Operating System (OS) on a control subject of the industrial technological system, receiving, by the protected OS, a plurality of log files from the control subject, analyzing, by the protected OS, the plurality of log files to determine if a suspicious action has been applied to the control subject, wherein the control subject is configured to apply a controlling action to the object of control, intercepting, by the protected OS, network packets transmitted by an application launched in a guest OS to the control subject, and preventing, by the protected OS, an interaction between the application and the control subject, in response to determining that the suspicious action has been applied to the control subject.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
66.
System and method for detecting malicious use of a remote administration tool
Disclosed herein are systems and methods for detecting malicious use of a remote administration tool. In one aspect, an exemplary method comprises, gathering, from a flow of events, data that comprises any number of keyboard entry events, wherein each event is related at least to actions indicating a keyboard entry and a context in which the event occurred, comparing the gathered keyboard entry events with signatures from a database, and when a match is found with at least one signature, identifying an activity that is characteristic of the remote administration tool being controlled remotely.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
G06Q 20/10 - Payment architectures specially adapted for electronic funds transfer [EFT] systems; Payment architectures specially adapted for home banking systems
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
G06Q 40/02 - Banking, e.g. interest calculation or account maintenance
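As a rough illustration of entry 66, the sketch below matches gathered keyboard-entry events, each carrying an action and a context, against signatures; a match is treated as activity characteristic of remote control. The event fields and signatures are invented.

events = [
    {"action": "keyboard_entry", "context": "cmd.exe",     "text": "whoami"},
    {"action": "keyboard_entry", "context": "cmd.exe",     "text": "net user admin /add"},
    {"action": "keyboard_entry", "context": "notepad.exe", "text": "meeting notes"},
]

# Each signature: a context plus substrings whose appearance in keyboard input
# is characteristic of a remotely controlled administration session.
signatures = [
    {"context": "cmd.exe", "needles": ["net user", "/add"]},
    {"context": "powershell.exe", "needles": ["Invoke-WebRequest", "-enc"]},
]

def matched_signatures(events, signatures):
    hits = []
    for sig in signatures:
        for ev in events:
            if ev["context"] == sig["context"] and all(n in ev["text"] for n in sig["needles"]):
                hits.append((sig["context"], ev["text"]))
    return hits

hits = matched_signatures(events, signatures)
if hits:
    print("remote administration tool activity suspected:", hits)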
Disclosed herein are systems and methods for identifying a cryptor that encodes files of a computer system. An exemplary method comprises, identifying one or more files into which a data entry is performed by a suspect process; for each identified file, determining characteristics of the identified file, identifying classes of file modifications using a trained machine learning model and respective characteristics of the identified file, identifying the suspect process as being associated with the cryptor based on the identified classes of file modifications of the file, and protecting the computer system from the cryptor.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
G06F 21/55 - Detecting local intrusion or implementing counter-measures
Disclosed herein are systems and methods for protecting a user's devices based on types of anomalies. In one aspect, an exemplary method comprises, determining, by a feature determiner, one or more values of features of a user's activity performed using at least one of the user's devices, detecting, by an anomaly detector, anomalies indicative of at least one threat to information security of the user's devices based on the one or more values of the features, for each detected anomaly, identifying, by the anomaly detector, a type of the anomaly and at least one device that is a source of the anomaly, wherein the type of anomaly is identified using an anomaly classifier and one or more values of features, and for each user's device, modifying, by a device protector, one or more information security settings of the user's device based on the identified type of the anomaly.
Disclosed herein are methods and systems for selecting a detection model for detection of a malicious file. An exemplary method includes: monitoring a file during execution of the file within a computer system by intercepting commands of the file being executed and determining one or more parameters of the intercepted commands. A behavior log of the file being executed containing behavioral data is formed based on the intercepted commands and based on the one or more parameters of the intercepted commands. The behavior log is analyzed to form a feature vector. The feature vector characterizes the behavioral data. One or more detection models are selected from a database of detection models based on the feature vector. Each of the one or more detection models includes a decision-making rule for determining a degree of maliciousness of the file being executed.
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
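An illustrative sketch of the selection step in the detection-model abstract above: intercepted commands are folded into a fixed-order feature vector, and detection models whose applicability condition matches the vector are selected, each contributing a decision rule for the degree of maliciousness. The log format, tracked commands and models are hypothetical.

from collections import Counter

behavior_log = [
    ("CreateFile",  {"path": "C:\\temp\\a.bin"}),
    ("WriteFile",   {"path": "C:\\temp\\a.bin"}),
    ("RegSetValue", {"key": "HKCU\\Software\\Run"}),
]

TRACKED_COMMANDS = ["CreateFile", "WriteFile", "RegSetValue", "Connect"]

def feature_vector(log):
    """Counts of intercepted command types, in a fixed order."""
    counts = Counter(cmd for cmd, _ in log)
    return [counts.get(cmd, 0) for cmd in TRACKED_COMMANDS]

# Each model: the behavioral features it applies to and a decision-making rule
# yielding a degree of maliciousness for the file being executed.
detection_models = [
    {"name": "dropper",  "applies": lambda v: v[0] > 0 and v[1] > 0,
     "rule": lambda v: 0.4 + 0.1 * v[1]},
    {"name": "netflood", "applies": lambda v: v[3] > 10,
     "rule": lambda v: 0.9},
]

vec = feature_vector(behavior_log)
selected = [m for m in detection_models if m["applies"](vec)]
for m in selected:
    print(m["name"], "degree of maliciousness:", min(1.0, m["rule"](vec)))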
Systems and methods for detecting malicious activity in a computer system. One or more graphs can be generated based on information objects about the computer system and relationships between the information objects, where the information objects are vertices in the graphs and the relationships are edges in the graphs. Comparison of generated graphs to existing graphs can determine a likelihood of malicious activity.
Disclosed herein are systems and methods for generating heuristic rules for identifying spam emails based on fields in headers of emails. In one aspect, an exemplary method comprises, collecting statistical data on contents of a plurality of emails; analyzing the statistical data to identify different types of content, including headers or hyperlinks in said emails; grouping the emails into clusters based on the types of content identified in said emails, wherein at least one cluster is grouped based on fields in headers of said emails; generating a hash from the most frequent combination of data in each cluster; formulating regular expressions based on analysis of hyperlinks of emails corresponding to the generated hashes; and generating a heuristic rule for identifying spam emails by combining the hashes and the corresponding regular expressions, wherein the hash is generated based on fields in the headers of said emails.
A method for processing information security events of a computer system includes receiving information related to a plurality of information security events that occurred in the computer system. Each of the events includes an event related to a possible violation of information security of the computer system. A verdict is determined for each of the events. The verdict includes: i) information security incident or ii) false positive. The verdict is false positive if the probability of a false positive for the corresponding event is greater than a first threshold. Verdicts are changed for a subset of the events from the false positive to the information security incident. A number of events in the subset is lower than a second threshold. An analysis of the events having a verdict of the information security incident is performed to determine if the computer system is under a cyberattack.
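A minimal sketch, with invented probabilities and thresholds, of the verdict logic described above: events whose false-positive probability exceeds a first threshold are marked as false positives, and a subset smaller than a second threshold is flipped back to incidents for analysis.

FP_THRESHOLD = 0.7   # first threshold: above this, verdict = false positive
REVIEW_LIMIT = 2     # second threshold: the flipped subset stays below this size

events = [
    {"id": 1, "p_false_positive": 0.95},
    {"id": 2, "p_false_positive": 0.85},
    {"id": 3, "p_false_positive": 0.72},
    {"id": 4, "p_false_positive": 0.10},
]

for e in events:
    e["verdict"] = "false positive" if e["p_false_positive"] > FP_THRESHOLD else "incident"

# Flip the least certain false positives back to incidents, keeping the flipped
# subset smaller than the second threshold.
candidates = sorted((e for e in events if e["verdict"] == "false positive"),
                    key=lambda e: e["p_false_positive"])
for e in candidates[:REVIEW_LIMIT - 1]:
    e["verdict"] = "incident"

incidents = [e["id"] for e in events if e["verdict"] == "incident"]
print("events analyzed for a possible cyberattack:", incidents)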
Disclosed herein are systems and methods for detecting an unapproved use of a computing device of a user. In one aspect, an exemplary method comprises, by a security application: detecting a script executing in a browser on the computing device of the user, intercepting messages being exchanged during an interaction of the script with a server, wherein the intercepted messages comprise at least one of messages sent from the script to the server and from the server to the script, analyzing the intercepted messages to determine whether or not attributes of an unapproved use of resources of the computing device of the user are present, and detecting the unapproved use of the resources of the computing device of the user when at least one of said attributes is detected.
A method for classifying incoming events includes intercepting an incoming event received by a mobile device. The content of the intercepted event is analyzed to determine one or more attributes of the intercepted event. The intercepted event is compared to a plurality of previously collected and classified events, stored in an event repository, based on the one or more determined attributes to identify one or more similar events. A rating of each of the one or more similar events is determined. The rating characterizes probability that the corresponding event belongs to a particular class. The intercepted event is classified as undesirable on the mobile device if the rating value of the one or more similar events is less than a predetermined threshold value.
An example of a method for detecting hacking activities includes identifying one or more attributes of each interaction in a sequence of interactions between one or more users and bank services during a predetermined time period. The one or more users are categorized into a plurality of groups based on the identified attributes. Each of the plurality of groups includes users performing the sequence of interactions with the bank services during the predetermined time period. A degree of anomaly is calculated for each of the plurality of groups based on a total number of users associated with a corresponding sequence of interactions and based on a number of users associated with the corresponding sequence of interactions during the predetermined time period. The calculated degree of anomaly is compared with a predetermined threshold. Hacking activity is identified, in response to determining that the calculated degree of anomaly exceeds the predetermined threshold.
G06Q 20/10 - Payment architectures specially adapted for electronic funds transfer [EFT] systemsPayment architectures specially adapted for home banking systems
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
A method for emulating execution of a file includes emulating execution of the instructions of a file on a virtual processor of an emulator. The execution of the instructions is halted in response to an invocation of an API function. A determination is made whether the invoked API function is present in the updatable modules of the emulator. The updatable modules contain implementation of API functions. In response to determining that the invoked API function is present in the updatable modules, execution of the invoked API function is emulated according to corresponding implementation contained in the updatable modules. Otherwise, result of execution of the invoked API function is generated by executing a corresponding virtual API function on a processor of a computing device.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
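A hedged sketch of the dispatch step described in the emulation abstract above: an invoked API function is emulated via an implementation from the updatable modules when one exists, and otherwise falls back to a generic stub standing in for execution of a corresponding virtual API function. Function names and state are illustrative, not the real emulator.

def updatable_get_tick_count(state):
    # Implementation shipped in an updatable module of the emulator.
    state["ticks"] += 10
    return state["ticks"]

UPDATABLE_MODULES = {"GetTickCount": updatable_get_tick_count}

def builtin_stub(name, state):
    # Fallback path: a generic result produced outside the updatable modules.
    return 0

def emulate_api_call(name, state):
    handler = UPDATABLE_MODULES.get(name)
    if handler is not None:
        return handler(state)          # emulate per the updatable module
    return builtin_stub(name, state)   # otherwise use the fallback path

state = {"ticks": 0}
print(emulate_api_call("GetTickCount", state))   # handled by an updatable module
print(emulate_api_call("CreateFileW", state))    # falls back to the stub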
77.
System and method of selection of a model to describe a user
Disclosed herein are systems and methods for selection of a model to describe a user. In one aspect, an exemplary method comprises, creating data on preferences of the user based on previously gathered data on usage of a computing device by the user and a base model that describes the user, wherein the base model is previously selected from a database of models including a plurality of models, determining an accuracy of the data created on the preferences of the user, wherein the determination is based on observed behaviors of the user, when the accuracy of the data is determined as being less than a predetermined threshold value, selecting a correcting model related to the base model, and retraining the base model, and when the accuracy of the data is determined as being greater than or equal to the predetermined threshold value, selecting the base model to describe the user.
A method for providing an interprocess interaction in an electronic control unit having an operating system defining a kernel space, wherein the method involves steps in which the kernel of the operating system intercepts a request for an interprocess communication between a first application and a second application of the electronic control unit. A verdict is requested, from an access control component of the operating system, with respect to granting access for the requested interprocess communication between the first application and the second application of the electronic control unit. The access control component generates the verdict for the requested interprocess communication based on a security policy. The kernel of the operating system selectively allows the requested interprocess communication between the first application and the second application based on the generated verdict.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
Systems and methods are provided for detecting system anomalies. The described technique includes receiving system parameters specifying functionality of a computing system. An anomaly is detected within the computing system. A recovery method is determined based on a recovery-method model and information about the detected anomaly, responsive to detecting the anomaly in the computing system. The determined recovery method is configured to ensure requirements of the computing system are met. Furthermore, responsive to detecting the anomaly in the computing system, the determined recovery method is implemented in response to installation of the selected system-compatible tool.
A method for voice call analysis and classification includes intercepting a voice call session between an initiating device and a recipient device. Voice call data exchanged between the initiating device and the recipient device during the voice call session is transformed into a predefined data format. The transformed voice call data is analyzed to determine one or more attributes of the intercepted voice call. One or more features associated with the intercepted voice call session are identified based on the determined one or more attributes. The intercepted voice call is classified using the identified one or more features.
Disclosed herein are systems and methods for anonymous sending of data from a source device to a recipient device. In one aspect, an exemplary method comprises, by the source device: receiving a request to send data to the recipient device, processing the data such that an identifier of the user and identification data are not linked to the data to be sent to the recipient, and determining whether the identifier of the user is absent in the source device, when the identifier of the user is absent, generating the identifier of the user, sending the identifier of the user to a token generator, wherein the sent identifier comprises either the generated identifier or an existing identifier found during the determination of whether the identifier is absent in the source device, and sending, to the recipient device, a combination of a random token received from the token generator and the data.
Systems and methods for assessing an impact of software on components of an industrial automation and control system (IACS) are disclosed. In one aspect, an exemplary method comprises, selecting samples of software to be analyzed for capability to cause harm to the IACS. In one aspect, the method further comprises, for each particular configuration of the IACS being tested, performing analysis to identify effects of the selected samples on the particular configuration, wherein the identified effects include at least causes and events resulting in disruption of operations of the particular configuration of the IACS, and wherein the particular configuration includes at least components of the industrial system being simulated on a testing device. In one aspect, the method further comprises, analyzing the identified causes and events, and based on the analysis, assessing the impact of the selected sample by determining a degree of influence of the software on the particular configuration.
G05B 19/406 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by monitoring or safety
83.
System and method of counting votes in an electronic voting system
Disclosed herein are systems and methods for counting a ballot in an electronic voting system. In one aspect, an exemplary method comprises, generating, by a token generator of the system, a number of tokens, wherein every token unambiguously identifies actions of a user during an electronic voting, when the user is identified and authenticated successfully, enabling the user to select a token from the number of tokens, activating, by a ballot activator of the system, a ballot for the user, wherein activating includes generating the ballot, unambiguously relating the token selected by the user to the ballot, and enabling the user to access the ballot, and counting, by a ballot counter of the system, the ballot filled out by the user.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06K 19/07 - Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards with integrated circuit chips
An example of a method for detecting hacking activities includes categorizing a plurality of web pages of a web site providing bank services using a trained semantic model. The trained semantic model uses at least one resource identifier of a web page as an input and generates a web page category as an output. One or more attributes of an interaction between a user and bank services are identified. The one or more identified attributes are analyzed by comparing the one or more identified attributes with attributes known to belong to hacking interactions based on a corresponding web page category. Hacking activity is identified based on the results of the analysis.
A method for protecting electronics systems of a vehicle from cyberattacks includes intercepting messages transmitted on a first communications bus between a plurality of Electronic Control Units (ECUs) of a vehicle. The ECUs are communicatively coupled to the first communications bus. At least one recipient ECU that is a recipient of the intercepted messages is determined. The intercepted messages and information indicating the determined at least one recipient ECU are stored in a log. The method further includes detecting a computer attack of the vehicle based on satisfaction of at least one condition of a rule by the stored messages and information in the log and blocking the computer attack of the vehicle by performing an action associated with the rule. The rule may depend on whether one or more intercepted messages are malicious messages and a recipient ECU of the malicious messages.
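As a rough sketch of the abstract above, intercepted bus messages and their recipient ECUs are stored in a log, and an attack is flagged when a rule's condition over the log is satisfied. The message format, ECU names and rule are invented; 0x10 appears only as an example diagnostic service identifier.

log = []

def intercept(message, recipient_ecu):
    """Store an intercepted bus message together with its determined recipient ECU."""
    log.append({"msg": message, "recipient": recipient_ecu})

def rule_unexpected_diagnostics(log):
    """Condition: a diagnostic-session request reaches the braking ECU while driving."""
    return any(e["msg"].get("service") == 0x10 and e["recipient"] == "brake_ecu"
               for e in log)

intercept({"service": 0x10, "speed_kmh": 80}, "brake_ecu")

if rule_unexpected_diagnostics(log):
    print("computer attack suspected: blocking per the rule's associated action")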
Disclosed herein are systems and methods for generating individual content for a user of a service. In one aspect, an exemplary method comprises, gathering data on behavior of a user of a computing device, training a model of user behavior based on the gathered data, wherein the trained model identifies the user to a predetermined degree of reliability, and generating individual content for the user of the service based on a predetermined service environment in accordance with a trained model received from a model transmitter.
Disclosed herein are systems and methods for access control in an electronic control unit (ECU). In one aspect, an exemplary method comprises, by an operating system (OS) kernel of the ECU of a vehicle, intercepting at least one request for an interaction of a control application with a basic component through an interaction interface provided by the basic component for interactions with applications, requesting from a security subsystem of the operating system, a verdict as to whether or not access for the interaction of the control application with the basic component through the interaction interface can be provided, and when the verdict is received from the security subsystem granting the access, providing the interaction between the basic component and the control application through the interaction interface in accordance with the received verdict.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
B60R 16/023 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for transmission of signals between vehicle parts or subsystems
88.
System and method for determining a coefficient of harmfulness of a file using a trained learning model
Disclosed herein are systems and methods for determining a coefficient of harmfulness of a file using a trained learning model. In one aspect, an exemplary method includes forming a first vector containing a plurality of attributes of a known malicious file. A learning model is trained using the first vector to identify a plurality of significant attributes that influence identification of the malicious file. A second vector is formed containing a plurality of attributes of known safe files. The learning model is trained using the second vector to identify attributes insignificant to the identification of the malicious file. An unknown file is analyzed by the learning model. The learning model outputs a numerical value identifying a coefficient of harmfulness relating to a probability that the unknown file will prove to be harmful.
Disclosed are systems and methods for countering a cyberattack on computing devices by means of which users are interacting with services, which store personal data on the users. Data is collected about the services with which the users are interacting by means of the devices, as well as data about the devices themselves. The collected data is analyzed to detect when a cyberattack on the devices is occurring as a result of a data breach of personal data on users from the online service. A cluster of the computing devices of different users of the online service experiencing the same cyberattack is identified. Attack vectors are identified based on the characteristics of the cyberattack experienced by the computing devices in the cluster. Actions are selected for countering the cyberattack based on the identified attack vector and are sent to the devices of all users of the corresponding cluster.
A method for detection of malicious files includes training a mapping model for mapping files in a probability space. A plurality of characteristics of an analyzed file is determined based on a set of rules. A mapping of the analyzed file in probability space is generated based on the determined plurality of characteristics. A first database is searched using the generated mapping of the analyzed file to determine whether the analyzed file is associated with a family of malicious files. The first database stores mappings associated with one or more families of malicious files. In response to determining that the analyzed file is associated with the family of malicious files, a selection of one or more methods of malware detection is made from a second database. The second database stores a plurality of malware detection methods. The selected method is used to detect the associated family.
A method for using inventory rules to identify devices of a computer network includes intercepting data traffic across one or more communication links of the computer network. The intercepted data traffic is analyzed to determine whether one or more of a plurality of inventory rules is satisfied by the intercepted data traffic. Each of the plurality of inventory rules includes one or more conditions indicating the presence of a particular computer network device having a set of parameters. Devices of the computer network are identified using one or more satisfied inventory rules.
H04L 41/0853 - Retrieval of network configuration; tracking network configuration history by actively collecting configuration information or by backing up configuration information
A method for controlling secure access to user requested data includes retrieving information related to potential unauthorized access to user requested data. The information is collected by a plurality of sensors of user's mobile device. A trained statistical model representing an environment surrounding a user is generated based on the retrieved information. A first data security value is determined using the generated trained statistical model. The first data security value indicates a degree of information security based on user's environment. A second data security value is determined using the generated trained statistical model. The second data security value indicates a degree of confidentiality of the user requested data. The user requested data is filtered based on a ratio of the determined first data security value and the second data security value.
A method for detecting unmanned aerial vehicles (UAV) includes detecting an unknown flying object in a monitored zone of air space. An image of the detected unknown flying object is captured. The captured image is analyzed to classify the detected unknown flying object. A determination is made, based on the analyzed image, whether the detected unknown flying object comprises a UAV. In response to determining that the detected unknown flying object comprises a UAV, one or more radio signals exchanged between the UAV and a user of the UAV are suppressed until the UAV departs from the monitored zone of air space.
A method for detecting unmanned aerial vehicles (UAV) includes detecting an unknown flying object in a monitored zone of air space. An image of the detected unknown flying object is captured. The captured image is analyzed to classify the detected unknown flying object. A determination is made, based on the analyzed image, whether the detected unknown flying object comprises a UAV.
Disclosed herein are systems and methods for casting a vote in an electronic balloting system. In one aspect, an exemplary method comprises, authenticating a voter from whom a request for casting a vote is received, when the voter is successfully authenticated, generating an electronic ballot based on voting information, gathering data about an electronic vote of the voter, the electronic vote representing a choice of the voter on the electronic ballot, generating and sending at least one request to the voter, the request being generated for confirmation of a validity of the gathered data on the electronic vote, generating a hardcopy of the ballot filled out by the voter and placing the generated hardcopy in a centralized repository, and counting the vote, when the hardcopy of the ballot is successfully generated and an affirmative response is received from the voter in response to the at least one request.
A method for analyzing relationships between clusters of devices includes selecting a first device from a first cluster of devices and selecting a second device from a second cluster of devices. Information related to a first communication link associated with the first device and information related to a second communication link associated with the second device is obtained. A similarity metric is computed based on the obtained information. The similarity metric represents a similarity between the first communication link and the second communication link associated with the second device. A relationship between the first and second clusters is determined using the computed similarity metric. When a cyberattack is detected on the devices in the first cluster or the second cluster, protection of all devices in the first cluster and the second cluster is modified based on the determined relationship in order to defend the respective clusters from the cyberattack.
A method for defending a network of electronic devices from cyberattacks includes obtaining information about a plurality of devices and information about communication links between the plurality of devices and surrounding environment and determining types of the communication links using heuristic rules. The types of communication links are compared using corresponding link profiles. One or more similar communication links are identified based on the comparison. A cluster of devices is generated by combining a subset of the plurality of devices. The cluster includes one or more devices having one or more similar communication links. A surrounding environment profile is generated for the generated cluster of devices. When a cyberattack is detected on one of the devices in the cluster, the surrounding environment profile is modified for the cluster of devices in order to defend all devices in the cluster from the cyberattack.
Techniques are provided for generating groups of filtering rules. From a plurality of lists of filtering rules, a priority list is determined as the list having the highest indicator of frequency of utilization among the filtering rules from the plurality of lists. The priority list of filtering rules is transmitted to a mobile device. Each of the remaining lists of filtering rules that have not been transmitted to the mobile device is divided into a plurality of parts. A plurality of groups of filtering rules is generated based on the frequency of utilization within each of the remaining lists of filtering rules. Each generated group contains at most one part of each remaining list of filtering rules.
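A minimal sketch, with invented rule lists, of the grouping described above: the most-used list is sent first as the priority list, each remaining list is split into parts, and groups are built so that each contains at most one part of every remaining list, ordered by frequency of utilization.

lists = {
    "ads":      {"usage": 120, "rules": ["a1", "a2", "a3", "a4"]},
    "trackers": {"usage": 300, "rules": ["t1", "t2", "t3", "t4"]},
    "phishing": {"usage": 80,  "rules": ["p1", "p2", "p3", "p4"]},
}

# Priority list: the list with the highest frequency of utilization is sent first.
priority_name = max(lists, key=lambda n: lists[n]["usage"])
priority_list = lists[priority_name]["rules"]

def split(rules, n_parts=2):
    """Divide a rule list into roughly equal consecutive parts."""
    size = (len(rules) + n_parts - 1) // n_parts
    return [rules[i:i + size] for i in range(0, len(rules), size)]

remaining = {name: split(v["rules"]) for name, v in lists.items() if name != priority_name}

# Each group holds at most one part of every remaining list, most-used lists first.
order = sorted(remaining, key=lambda n: lists[n]["usage"], reverse=True)
groups = []
for i in range(max(len(parts) for parts in remaining.values())):
    groups.append([remaining[name][i] for name in order if i < len(remaining[name])])

print("priority:", priority_name, priority_list)
for i, group in enumerate(groups):
    print("group", i, ":", group)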
Disclosed herein are systems and methods for reducing a number of false positives in classification of files. In one aspect, an exemplary method comprises, analyzing a file to determine whether or not the file is to be recognized as being malicious, when the file is recognized as being malicious, analyzing the file to detect a false positive outcome, when the false positive outcome is detected, excluding the file from being scanned and calculating a flexible hash of the file, and storing the calculated flexible hash in a database of exceptions.
The present disclosure provides for systems and methods for generating an image of a web resource to detect a modification of the web resource. An exemplary method includes selecting one or more objects of the web resource based on one or more object attributes; identifying a plurality of tokens for each selected object based on contents of the selected object; calculating a hash signature for each selected object of the web resource using the identified plurality of tokens; identifying potentially malicious calls within the identified plurality of tokens; generating an image of the web resource based on the plurality of hash signatures and based on the identified potentially malicious calls, wherein the image of the web resource comprises a vector representation of the contents of the web resource; and detecting whether the web resource is modified based on the image of the web resource.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06N 7/00 - Computing arrangements based on specific mathematical models
G06F 16/56 - Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
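A hedged sketch of the image-building idea from the web-resource abstract above: each selected object is tokenized and hashed into a signature, tokens resembling potentially dangerous calls are flagged, and two images are compared to detect modification. The tokenizer, suspicious-call list and objects are illustrative only.

import hashlib
import re

SUSPICIOUS_CALLS = {"eval", "document.write", "atob"}

def tokens(content):
    """Crude tokenizer over an object's contents."""
    return re.findall(r"[A-Za-z_.]+", content)

def object_signature(content):
    """Stable hash signature over an object's tokens."""
    return hashlib.sha256(" ".join(tokens(content)).encode()).hexdigest()

def resource_image(objects):
    signatures = [object_signature(c) for c in objects]
    suspicious = sorted({t for c in objects for t in tokens(c) if t in SUSPICIOUS_CALLS})
    # Vector-like representation: per-object signatures plus flagged calls.
    return {"signatures": signatures, "suspicious_calls": suspicious}

original = ["<script>renderCart()</script>", "<form action=/pay>"]
served   = ["<script>renderCart(); eval(atob(p))</script>", "<form action=/pay>"]

modified = resource_image(original) != resource_image(served)
print("web resource modified:", modified)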