Abstract of: CN119156615A
Systems, methods, and computer program products may (i) obtain a graph comprising a plurality of edges and a plurality of nodes for the plurality of edges, each labeled node in a subset of labeled nodes being associated with a label, and each unlabeled node in a subset of unlabeled nodes not being associated with a label; (ii) train a graph neural network (GNN) using the graph and the label for each labeled node, wherein training the GNN generates a prediction for each node; (iii) generate a candidate pool of candidate nodes from the subset of unlabeled nodes; (iv) generate a predicted label for each candidate node using a label propagation algorithm; (v) select a candidate node in the candidate pool associated with a maximum mixed entropy reduction of the graph; and (vi) provide the selected candidate node for labeling.
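The entropy-driven selection of steps (iv)–(vi) can be sketched as follows; the node names, the two-class label distributions, and the use of plain Shannon entropy as a stand-in for the abstract's "mixed entropy reduction" are all illustrative assumptions, not the patented criterion:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

# Hypothetical predicted label distributions for three unlabeled candidate
# nodes, e.g. produced by a label-propagation pass (values illustrative only).
candidate_preds = {
    "n1": [0.9, 0.1],   # confident: labeling it removes little uncertainty
    "n2": [0.5, 0.5],   # maximally uncertain
    "n3": [0.7, 0.3],
}

# Select the candidate whose labeling would remove the most entropy
selected = max(candidate_preds, key=lambda n: entropy(candidate_preds[n]))
```

In this toy form, the most uncertain node is the one whose oracle label is most informative, which is the intuition behind entropy-reduction active learning.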
Abstract of: WO2023215214A1
Systems, methods, and computer program products are provided for saving memory during training of knowledge graph neural networks. The method includes receiving a training dataset including a first set of knowledge graph embeddings associated with a plurality of entities for a first layer of a knowledge graph, inputting the training dataset into a knowledge graph neural network to generate at least one further set of knowledge graph embeddings associated with the plurality of entities for at least one further layer of the knowledge graph, quantizing the at least one further set of knowledge graph embeddings to provide at least one set of quantized knowledge graph embeddings, storing the at least one set of quantized knowledge graph embeddings in a memory, and dequantizing the at least one set of quantized knowledge graph embeddings to provide at least one set of dequantized knowledge graph embeddings.
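The quantize/store/dequantize cycle described above can be sketched with simple symmetric uniform quantization; the bit width, the max-absolute-value scale scheme, and the example embedding are illustrative assumptions rather than the patent's actual scheme:

```python
def quantize(vec, bits=8):
    """Uniformly quantize a float vector to signed integers plus a scale."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(v) for v in vec) / qmax or 1.0
    return [round(v / scale) for v in vec], scale

def dequantize(qvec, scale):
    """Recover an approximate float vector from the quantized form."""
    return [q * scale for q in qvec]

emb = [0.82, -0.41, 0.05, -0.99]               # hypothetical layer embedding
q, s = quantize(emb)                           # integers are what gets stored
approx = dequantize(q, s)                      # recovered for the next layer
```

Storing the integer vector plus one scale factor is what saves memory relative to keeping the full-precision embeddings of every layer.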
Abstract of: GB2633414A
A computer-implemented method involves processing a received input data item using an epistemic neural network of a trained machine learning model. A plurality of possible beliefs are derived from the input data item, e.g. using the epistemic neural network and a translator module. A probability associated with each possible belief is calculated, preferably using a reasoning module to generate a trigger graph. A certainty associated with the probability for each possible belief is calculated, preferably using a d-NNF module. In some embodiments, the activation of each output neuron j in the epistemic neural network is the parameter a_j of a K-dimensional Dirichlet distribution. In some embodiments, one of the possible beliefs is selected based on the calculated probabilities and certainties, e.g. by selecting a belief with probability and certainty values greater than threshold values. An electronic device may be controlled based on the selected belief.
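Reading the output activations as Dirichlet parameters can be sketched as below; the concrete activations and the evidential certainty formula (K divided by the total evidence) are assumptions borrowed from standard evidential deep learning, not necessarily the patent's d-NNF computation:

```python
# Hypothetical output activations a_j interpreted as the parameters of a
# K-dimensional Dirichlet distribution (values are illustrative only).
alpha = [8.0, 1.0, 1.0]                 # K = 3 possible beliefs
K = len(alpha)
S = sum(alpha)                          # total evidence

probs = [a / S for a in alpha]          # expected belief probabilities
uncertainty = K / S                     # evidential uncertainty (lower = more certain)
certainty = 1.0 - uncertainty

# Toy selection rule from the abstract: thresholds on probability and certainty
selected = probs[0] > 0.5 and certainty > 0.5
```

A strongly activated neuron thus yields both a high probability and a high certainty, while a flat activation pattern yields high uncertainty even if one class narrowly dominates.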
Abstract of: US2025078998A1
According to various embodiments, a machine-learning based system for mental health disorder identification and monitoring is disclosed. The system includes one or more processors configured to interact with a plurality of wearable medical sensors (WMSs). The processors are configured to receive physiological data from the WMSs. The processors are further configured to train at least one neural network based on raw physiological data augmented with synthetic data and subjected to a grow-and-prune paradigm to generate at least one mental health disorder inference model. The processors are also configured to output a mental health disorder-based decision by inputting the received physiological data into the generated mental health disorder inference model.
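The pruning half of the grow-and-prune paradigm can be sketched with magnitude-based pruning; the weight values, keep ratio, and flat weight list are illustrative assumptions (real grow-and-prune also adds connections, which is omitted here):

```python
def prune_weights(weights, keep_ratio=0.5):
    """Magnitude-based pruning: zero out the smallest-magnitude weights."""
    n_keep = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Hypothetical connection weights of one layer (values illustrative only)
w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.3]
pruned = prune_weights(w, keep_ratio=0.5)
```

Iterating grow and prune phases yields the compact inference model the abstract refers to, at a fraction of the original connection count.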
Abstract of: US2025078809A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
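The interface of the character-to-spectrogram mapping can be sketched as below; this deterministic lookup is only a stand-in for the sequence-to-sequence recurrent network, and the frame size and per-character formula are invented for illustration:

```python
# Toy stand-in for the sequence-to-sequence mapping: each input character
# becomes one fixed-length "spectrogram frame" (time axis = characters).
FRAME_BINS = 4

def char_to_frame(c):
    """Deterministic pseudo-frame for one character (illustrative only)."""
    return [float((ord(c) + k) % 7) for k in range(FRAME_BINS)]

def text_to_spectrogram(text):
    """A time-by-frequency matrix: one frame per input character."""
    return [char_to_frame(c) for c in text]

spec = text_to_spectrogram("hi")
```

The real system's subsystem role is analogous: it feeds the character sequence in and receives a time-frequency matrix out, which a vocoder would then turn into audio.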
Abstract of: US2025077859A1
Methods are provided for generating distilled multimedia data sets tailored to a user's persona and/or to task(s) to be performed in association with an enterprise network, and for enabling interactive contextual learning using a multi-modal knowledge graph. Methods involve obtaining multimedia data from one or more data sources related to operation or configuration of an enterprise network and determining context for generating a distilled multimedia data set based on at least one of user input and user persona. The methods further involve generating, based on the context, the distilled multimedia data set that includes a set of multimedia slices generated from the multimedia data using a multi-modal knowledge graph. The multi-modal knowledge graph is generated using a graph neural network and indicates relationships among a plurality of slices of the multimedia data. The methods further involve providing the distilled multimedia data set for performing one or more actions associated with the enterprise network.
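Selecting a distilled set of slices from the relationship graph can be sketched as a reachability traversal; the slice identifiers, the adjacency structure, and the choice of breadth-first traversal are illustrative assumptions (the patent's graph is produced by a GNN, not hand-written):

```python
from collections import deque

# Hypothetical multi-modal knowledge graph: edges indicate related
# multimedia slices (identifiers and relations are illustrative only).
slice_graph = {
    "s1": ["s2"],
    "s2": ["s3"],
    "s3": [],
    "s4": [],       # unrelated slice, excluded from the distilled set
}

def distill(seed, graph):
    """Collect all slices reachable from a context-matched seed slice."""
    seen, queue = {seed}, deque([seed])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

subset = distill("s1", slice_graph)
```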
Abstract of: WO2025047998A1
A processor of an electronic device according to an embodiment may be configured to acquire, from a first neural network to which a speech signal received via a microphone has been input, a first sequence of feature information about portions, corresponding to designated frame units, of the speech signal. The processor may be configured to acquire, from a second neural network to which a designated text has been input, a second sequence of one or more phonetic symbols for the designated text. The processor may be configured to acquire, from a third neural network to which the first sequence and the second sequence have been input, a first parameter indicating the degree of correspondence between the speech signal and the designated text. The processor may be configured to acquire one or more second parameters indicating the degree of correspondence between the portions of the speech signal and each of the one or more phonetic symbols included in the second sequence.
Abstract of: WO2025045736A1
A computer-implemented method for improving entity matching in a probabilistic matching engine can train a graph neural network (GNN) model on an output of a probabilistic matching engine to perform entity matching and determine counterfactual explanations for non-matches of entities. A list of data transformations can be identified by actionable recourse using the GNN model. The list of data transformations can be ranked, using the GNN model, based on computational overhead and an estimated improvement in entity matching within the probabilistic matching engine.
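The final ranking step can be sketched as a sort over candidate transformations; the transformation names, the improvement and overhead estimates, and the tie-breaking rule are illustrative assumptions (in the patent these estimates come from the GNN model):

```python
# Hypothetical candidate data transformations with an estimated matching
# improvement and a computational-overhead cost (values illustrative only).
transforms = [
    {"name": "normalize_names", "improvement": 0.12, "overhead": 1.0},
    {"name": "phonetic_encode", "improvement": 0.08, "overhead": 0.2},
    {"name": "address_geocode", "improvement": 0.12, "overhead": 5.0},
]

# Rank: higher estimated improvement first; lower overhead breaks ties
ranked = sorted(transforms, key=lambda t: (-t["improvement"], t["overhead"]))
```

Surfacing the ranked list is what makes the counterfactual explanation actionable: the user applies the cheapest, highest-yield transformation first.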
Abstract of: WO2025049978A1
Presented herein are systems and methods for the use and/or automated design of neural networks (NNs) with layer shared architecture (e.g., an intelligent layer shared (ILASH) neural architecture). In certain embodiments, the NN with layer shared architecture comprises a base set of layers (e.g., base layer shared model) and one or more branches extending from the base set of layers. Each branch may include one or more layers from the base set of layers and one or more additional layers different from the base set of layers, each branch designed and trained to perform a particular unique task on a common set of input data. As a result, the NN will share some layers among multiple tasks. Moreover, presented herein are techniques for using a predictive neural network search algorithm to create the branched network of the layer shared architecture.
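The layer-shared structure can be sketched with a shared base stack feeding several task branches; the toy arithmetic layers and task names are illustrative assumptions, but the key property is shown: the base computation runs once per input and is reused by every branch:

```python
# Toy layer-shared architecture: a common base stack plus per-task branches.
base_layers = [lambda x: x * 2, lambda x: x + 1]    # shared by all tasks
branches = {
    "task_a": [lambda x: x - 3],                    # task-specific layers
    "task_b": [lambda x: x * 10],
}

def forward(x, layers):
    """Apply a stack of layers in sequence."""
    for layer in layers:
        x = layer(x)
    return x

shared = forward(5, base_layers)                    # base runs once: (5*2)+1
outputs = {task: forward(shared, ls) for task, ls in branches.items()}
```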
Abstract of: US2025077877A1
A computer-implemented method for improving entity matching in a probabilistic matching engine can train a graph neural network (GNN) model on an output of a probabilistic matching engine to perform entity matching and determine counterfactual explanations for non-matches of entities. A list of data transformations can be identified by actionable recourse using the GNN model. The list of data transformations can be ranked, using the GNN model, based on computational overhead and an estimated improvement in entity matching within the probabilistic matching engine.
Abstract of: US2025077898A1
A computer-implemented method comprising a mathematical formulation of the social energy flow of Karma and a machine learning framework combining mathematical graph computation. Together these form a global sentiment aggregator for the measurement of social energy flows, formulated as a data processing, filtering, and sampling framework that fuses structural, interpretable graph machine learning with graph neural networks and introduces a Graph Attention Mechanism (GAM), whereby machine learning guides an interpretable graph computation substrate to generate general, structured predictions for the past, present, and future. This enables the framework to handle large data sets on a global scale in the graph neural network while affording interpretability within the less scalable graph computation substrate, which in turn optimizes the use of memory, computer storage, and processing power over traditional designs, making global-scale computations on commodity hardware feasible.
Abstract of: WO2025048121A1
A processor of an electronic device according to an embodiment may be configured to acquire, from a first neural network to which a speech signal received via a microphone has been input, a first sequence of feature information about portions of the speech signal corresponding to designated frame units. The processor may be configured to acquire, from a second neural network to which a designated text has been input, a second sequence of one or more phonetic symbols for the designated text. The processor may be configured to acquire, from a third neural network to which the first sequence and the second sequence have been input, a first parameter indicating the degree of correspondence between the speech signal and the designated text. The processor may be configured to acquire one or more second parameters indicating the degree of correspondence between the portions of the speech signal and each of the one or more phonetic symbols included in the second sequence.
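The two kinds of correspondence parameters can be sketched with cosine similarity between toy frame features and phoneme embeddings; the vectors, the per-phoneme max over frames, and the mean as the overall score are illustrative assumptions standing in for the third neural network:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical per-frame speech features (first sequence) and per-phoneme
# embeddings (second sequence); all names and values are illustrative.
frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
phonemes = {"k": [1.0, 0.0], "ae": [0.0, 1.0]}

# Second parameters: per-phoneme best correspondence over the frames
per_phoneme = {p: max(cosine(f, e) for f in frames) for p, e in phonemes.items()}
# First parameter: overall speech/text correspondence as the mean of the above
overall = sum(per_phoneme.values()) / len(per_phoneme)
```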
Publication No.: WO2025045319A1 06/03/2025
Applicant:
MARINOM GMBH [DE]
Abstract of: WO2025045319A1
The invention relates to a method for explaining and/or verifying the behaviour of a neural network trained with a training database, the method having the following steps: determining sensor information by means of a sensor, so that sensor information is available; applying the neural network to the sensor information, so that target information is available; determining similarity information by comparing the sensor information, or part of it, with an information database, so that the similarity information is available; and linking the similarity information with the target information, so that a target information tuple is available, on the basis of which the explanation and/or verification of the behaviour of the neural network can be realised.
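The linking step can be sketched as pairing the network's output with a nearest-neighbour similarity score against the information database; the L1-based similarity measure, the database entries, and the output label are all invented for illustration:

```python
def nearest_similarity(sensor_vec, database):
    """Highest similarity between the sensor reading and any database entry
    (toy measure: one minus the L1 distance; illustrative only)."""
    return max(1.0 - sum(abs(a - b) for a, b in zip(sensor_vec, entry))
               for entry in database)

# Hypothetical information database of known sensor readings
database = [[0.1, 0.2], [0.8, 0.9]]
sensor = [0.1, 0.25]                    # current sensor information
target = "obstacle"                     # hypothetical network output

# Target information tuple: output linked with its similarity evidence
explanation_tuple = (target, nearest_similarity(sensor, database))
```

A high similarity component indicates the input resembles the training-like database entries, supporting the verification of the network's output; a low one flags the prediction as poorly grounded.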