Abstract of: US2025190797A1
A computer system comprising a processor and a memory storing instructions that, when executed by the processor, cause the computer system to perform a set of operations. The set of operations comprises collecting domain attribute data comprising one or more domain attribute features for a domain, collecting sampled domain profile data comprising one or more domain profile features for the domain, and generating, using the domain attribute data and the sampled domain profile data, a domain reputation assignment utilizing a neural network.
Abstract of: US2025190796A1
Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
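The selective gradient blocking described above can be sketched in plain Python. The linear combining rule, the weights, and all names here are illustrative assumptions, not details from the patent; the point is that the combining node uses the subject node's activation in the forward pass yet zeroes the estimated partial derivative flowing back to it:

```python
# Hedged sketch: a "combining" node whose forward activation depends on a
# subject node, but which selectively blocks back-propagation to that node.

def combining_forward(subject_act, partner_act, w_s=0.5, w_p=0.5):
    """Activation of the combining node depends on both input activations."""
    return w_s * subject_act + w_p * partner_act

def combining_backward(grad_out, w_s=0.5, w_p=0.5, block_subject=True):
    """Return estimated partial derivatives for (subject, partner).

    When block_subject is True, the gradient routed to the subject node
    is zeroed, even though the forward pass used its activation.
    """
    grad_subject = 0.0 if block_subject else grad_out * w_s
    grad_partner = grad_out * w_p
    return grad_subject, grad_partner

out = combining_forward(1.0, 2.0)    # forward uses the subject node
g_s, g_p = combining_backward(1.0)   # backward blocks it (g_s == 0.0)
```

In a framework such as PyTorch, the same effect is commonly achieved by detaching the subject activation before it enters the combining node.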
Abstract of: US2025191355A1
Systems and methods for image modification to increase contrast between text and non-text pixels within the image. In one embodiment, an original document image is scaled to a predetermined size for processing by a convolutional neural network. The convolutional neural network identifies a probability that each pixel in the scaled image is text and generates a heat map of these probabilities. The heat map is then scaled back to the size of the original document image, and the probabilities in the heat map are used to adjust the intensities of the text and non-text pixels. For positive text, intensities of text pixels are reduced and intensities of non-text pixels are increased in order to increase the contrast of the text against the background of the image. Optical character recognition may then be performed on the contrast-adjusted image.
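The per-pixel adjustment step can be sketched for positive (dark-on-light) text. The linear shift rule and the `strength` parameter are assumptions for illustration; the patent abstract only states that text intensities go down and non-text intensities go up in proportion to the heat-map probabilities:

```python
# Minimal sketch of heat-map-driven contrast adjustment for positive text.

def clamp(v):
    """Keep an 8-bit grayscale intensity in [0, 255]."""
    return max(0, min(255, v))

def adjust_pixel(intensity, p_text, strength=128):
    """Darken a pixel in proportion to its text probability and brighten
    it in proportion to its non-text probability."""
    shifted = intensity - strength * p_text + strength * (1.0 - p_text)
    return clamp(round(shifted))

def adjust_image(image, heat_map, strength=128):
    """Apply the adjustment across a grayscale image (list of rows) using
    a heat map of per-pixel text probabilities of the same shape."""
    return [[adjust_pixel(v, p, strength)
             for v, p in zip(img_row, hm_row)]
            for img_row, hm_row in zip(image, heat_map)]
```

A pixel with probability 1.0 is pushed toward black, one with probability 0.0 toward white, widening the gap OCR has to resolve.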
Abstract of: US2025191695A1
We propose a neural network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions on the image patches on a sliding convolution window basis such that, in a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. The method further includes, beginning with the output features produced by the 3D convolutions as starting input, applying further convolutions to produce final output features, and processing the final output features through an output layer to produce base calls for one or more of the associated analytes at each of the sequencing cycles.
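The sliding 3D convolution over per-cycle patches can be sketched in plain Python. Patch shapes, kernel depth, and the single-filter simplification are illustrative assumptions; a real base caller would use many learned filters:

```python
# Hedged sketch: one 3D convolution filter sliding over a sequence of
# per-cycle image patches indexed as (cycle, row, col).

def conv3d_window(patches, kernel):
    """Convolve one filter with a window of image patches, producing a
    single output feature (no padding, unit stride)."""
    acc = 0.0
    for c, kpatch in enumerate(kernel):
        for r, krow in enumerate(kpatch):
            for col, w in enumerate(krow):
                acc += w * patches[c][r][col]
    return acc

def sliding_conv3d(patch_seq, kernel):
    """Slide the convolution window along the cycle axis, emitting one
    output feature per window position."""
    depth = len(kernel)
    return [conv3d_window(patch_seq[i:i + depth], kernel)
            for i in range(len(patch_seq) - depth + 1)]
```

With a depth-2 kernel over three cycles, two windows are produced, each mixing intensity data from adjacent sequencing cycles.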
Abstract of: US2025181897A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output sequences using a non-auto-regressive neural network.
Abstract of: US2025181912A1
Disclosed herein is a framework of causal cooperative networks that discovers the causal relationship between observational data in a dataset and the corresponding labels, and trains each model with inference of causal explanation, reasoning, and production. In supervised learning, neural networks are adjusted through prediction of the label for observation inputs. By contrast, a causal cooperative network, which includes explainer, reasoner, and producer neural network models, receives an observation and a label as a pair, produces multiple outputs, and calculates a set of inference, generation, and reconstruction losses from the input and the outputs. The explainer, the reasoner, and the producer are adjusted by error propagation for each model obtained from the set of losses.
Abstract of: WO2025109032A2
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating audio and, optionally, a corresponding image using generative neural networks. For example, a spectrogram of the audio can be generated using a hierarchy of diffusion neural networks.
Abstract of: US2025172933A1
A graph has nodes which represent entities of the industrial system and edges representing relations between the entities of the industrial system. A graph neural network processing the graph calculates a class prediction for at least one entity of the industrial system. A sub-symbolic explainer processes the class prediction to identify edges between nodes and associated features of nodes belonging to a sub-graph within the graph having influenced the class prediction. A large language model, equipped with a plugin for accessing the graph and receiving a prompt including the sub-graph, transforms the sub-graph into a maintenance justification in natural language. A user interface outputs a predictive maintenance alert along with the maintenance justification.
Abstract of: US2025173567A1
One embodiment provides for a computer-readable medium storing instructions that cause one or more processors to perform operations comprising determining a per-layer scale factor to apply to tensor data associated with layers of a neural network model and converting the tensor data to converted tensor data. The tensor data may be converted from a floating point datatype to a second datatype that is an 8-bit datatype. The instructions further cause the one or more processors to generate an output tensor based on the converted tensor data and the per-layer scale factor.
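The per-layer scale-factor conversion described above can be sketched as symmetric 8-bit quantization. The symmetric max-abs scheme and the [-127, 127] range are common-practice assumptions; the patent abstract does not specify the mapping:

```python
# Hedged sketch: per-layer scale factor for converting float tensor data
# to a signed 8-bit datatype and recovering an output tensor.

def per_layer_scale(tensor, num_bits=8):
    """Scale factor mapping the layer's float values onto a symmetric
    signed range, e.g. [-127, 127] for 8 bits."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(abs(x) for x in tensor)
    return max_abs / qmax if max_abs else 1.0

def quantize(tensor, scale):
    """Convert float tensor data to clipped 8-bit integers."""
    return [max(-127, min(127, round(x / scale))) for x in tensor]

def dequantize(qtensor, scale):
    """Generate an approximate float output tensor from the converted
    data and the per-layer scale factor."""
    return [q * scale for q in qtensor]
```

Because the scale is chosen per layer, each layer's dynamic range is used fully regardless of how magnitudes vary across the network.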
Abstract of: US2025173563A1
A deep learning model implements continuous, lifelong machine learning (LML) based on a Bayesian neural network using a framework including wide, deep, and prior components that use available real-world healthcare data differently to improve prediction performance. The outputs from each component of the framework are combined to produce a final output that may be utilized as a prior structure when the deep learning model is refreshed with new data in a deep learning process. Lifelong learning is implemented by dynamically integrating present learning from the wide and deep learning components with past learning from models in the prior component into future predictions. The Bayesian deep neural network-based LML model increases accuracy in identifying patient profiles by continuously learning, as new data becomes available, without forgetting prior knowledge.
Abstract of: US2025174225A1
A method for detection of a keyword in a continuous stream of audio signal, using a dilated convolutional neural network implemented by one or more computers embedded on a device. The dilated convolutional network comprises a plurality of dilation layers, including an input layer and an output layer, each layer comprising gated activation units and skip-connections to the output layer. The network is configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal. Generation of the output detection signal is based on a sequence of successive measurements provided to the input layer, each measurement being taken on a corresponding frame from a sequence of successive frames extracted from the continuous stream of audio signal, at a plurality of successive time steps.
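The building blocks named above (dilated causal convolutions, gated activation units, summed skip connections) can be sketched as follows. Kernel sizes, dilation schedule, and the shared-kernel simplification are illustrative assumptions, not parameters from the patent:

```python
import math

def dilated_conv1d(signal, kernel, dilation):
    """Causal 1-D convolution: the output at time t combines samples
    t, t - dilation, t - 2*dilation, ... (missing history treated as 0)."""
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation
            if idx >= 0:
                acc += w * signal[idx]
        out.append(acc)
    return out

def gated_activation(filter_out, gate_out):
    """Gated activation unit: tanh(filter) * sigmoid(gate), elementwise."""
    return [math.tanh(f) / (1.0 + math.exp(-g))
            for f, g in zip(filter_out, gate_out)]

def dilated_stack(signal, kernel, dilations=(1, 2, 4)):
    """Stack of dilation layers: each layer's gated output feeds the next,
    and skip connections are summed toward the output layer."""
    x = signal
    skip_sum = [0.0] * len(signal)
    for d in dilations:
        filt = dilated_conv1d(x, kernel, d)
        gate = dilated_conv1d(x, kernel, d)
        x = gated_activation(filt, gate)
        skip_sum = [s + v for s, v in zip(skip_sum, x)]
    return skip_sum
```

Doubling the dilation at each layer grows the receptive field exponentially, which is what lets a small on-device network see enough audio context to spot a whole keyword.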
Abstract of: US2024161736A1
A neural network model is trained by, in an initial training iteration, training the neural network model in a teacher-forcing mode in which an autoregressive channel includes a ground-truth shifted waveform, and outputting predictions of the neural network model; and in at least one additional training iteration, replacing the ground-truth shifted waveform in the autoregressive channel with the predictions of the neural network model obtained in a previous training iteration. An inference may then be performed by providing, for the neural network model, an additional channel containing at least one prediction of the neural network model outputted during training; and performing speech enhancement using the neural network model.
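The iteration schedule can be sketched as below. The toy `model_step` function stands in for the network, and the weight update itself is omitted; only the channel-replacement logic from the abstract is shown:

```python
# Hedged sketch: teacher forcing in iteration 0, then feeding the model
# its own previous predictions through the autoregressive channel.

def train_autoregressive(model_step, inputs, ground_truth_shifted,
                         iterations=3):
    """Iteration 0 runs with the ground-truth shifted waveform in the
    autoregressive channel; each later iteration replaces that channel
    with the predictions from the previous iteration."""
    ar_channel = ground_truth_shifted
    preds = None
    for _ in range(iterations):
        preds = [model_step(x, a) for x, a in zip(inputs, ar_channel)]
        ar_channel = preds  # replace ground truth with model predictions
    return preds
```

Exposing the model to its own outputs during training narrows the train/inference mismatch that pure teacher forcing leaves behind.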
Abstract of: US2025165789A1
A sound effect recommendation network is trained using a machine learning algorithm with a reference image, a positive audio embedding, and a negative audio embedding as inputs, to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than between the negative audio embedding and the reference image. The visual-to-audio correlation neural network is trained to identify one or more visual elements in the reference image and map the one or more visual elements to one or more sound categories or subcategories within an audio database.
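The distance objective described above matches the shape of a standard triplet margin loss. Euclidean distance and the margin value are common-practice assumptions; the abstract only requires the positive audio embedding to end up closer to the image than the negative one:

```python
# Hedged sketch: a triplet-style objective for visual-to-audio correlation.

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(image_emb, pos_audio_emb, neg_audio_emb, margin=1.0):
    """Zero once the positive audio embedding is closer to the image
    embedding than the negative one by at least `margin`; positive
    (and thus driving learning) otherwise."""
    return max(0.0,
               euclidean(image_emb, pos_audio_emb)
               - euclidean(image_emb, neg_audio_emb)
               + margin)
```

Minimizing this loss pulls matching image/audio pairs together in the shared embedding space while pushing mismatched pairs apart.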
Abstract of: US2025166115A1
Embodiments provide mechanisms to facilitate compute operations for deep neural networks. One embodiment comprises a graphics processing unit comprising one or more multiprocessors, at least one of the one or more multiprocessors including a register file to store a plurality of different types of operands and a plurality of processing cores. The plurality of processing cores includes a first set of processing cores of a first type and a second set of processing cores of a second type. The first set of processing cores are associated with a first memory channel and the second set of processing cores are associated with a second memory channel.
Abstract of: US2025165772A1
Techniques are disclosed relating to using graph neural networks to identify locations in a hierarchical data set. In various embodiments, a computing system receives a request to identify, in a data set having a hierarchical structure, one or more locations corresponding to a description specified by the request. The computing system assembles, from the data set, a graph data structure that includes nodes corresponding to locations in the data set and interconnected by edges preserving the hierarchical structure. The computing system applies a graph neural network algorithm to the graph data structure to generate location embeddings for the nodes and identifies the one or more locations by determining similarities between the generated location embeddings and a description embedding representative of the description.
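The two stages above, generating node embeddings over the hierarchical graph and matching them against a description embedding, can be sketched as follows. Mean aggregation as the GNN layer, cosine similarity, and the example location names are illustrative assumptions:

```python
# Hedged sketch: one message-passing round plus embedding similarity search.

def mean_aggregate(adjacency, features):
    """One round of mean-aggregation message passing over the graph; a
    minimal stand-in for a graph neural network layer producing
    location embeddings."""
    out = {}
    for node, neighbors in adjacency.items():
        rows = [features[n] for n in neighbors] + [features[node]]
        out[node] = [sum(col) / len(rows) for col in zip(*rows)]
    return out

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def best_locations(location_embeddings, description_embedding, k=1):
    """Rank locations by similarity of their embeddings to the
    description embedding and return the top k."""
    ranked = sorted(location_embeddings,
                    key=lambda n: cosine(location_embeddings[n],
                                         description_embedding),
                    reverse=True)
    return ranked[:k]
```

Because the edges preserve the hierarchy, each aggregation round blends a location's features with those of its parent and children before the similarity search runs.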
Abstract of: US2025165769A1
A method for balancing utilization of tiles in an analog in-memory computing system includes identifying, by a computer processor, a plurality of tiles in the analog in-memory computing system. The computer processor receives a plurality of layers in a neural network being processed by the analog in-memory computing system. The computer processor maps the plurality of layers in the neural network to the plurality of tiles. The computer processor determines a number of operations for each of the tiles in the plurality of tiles. The computer processor determines an equalized utilization rate for the tiles in the plurality of tiles. In addition, the computer processor assigns the layers to the plurality of tiles. The tiles are assigned so that a first utilization rate of a first tile is balanced relative to a second utilization rate of a second tile in the analog in-memory computing system.
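One simple way to realize the balanced assignment described above is a longest-processing-time greedy heuristic; this particular heuristic is an assumption for illustration, not the patent's stated algorithm:

```python
# Hedged sketch: balance per-tile operation counts when mapping neural
# network layers onto analog in-memory computing tiles.

def balance_layers(layer_ops, num_tiles):
    """Assign each layer, largest operation count first, to the currently
    least-loaded tile, so tile utilization rates end up balanced."""
    loads = [0] * num_tiles
    assignment = {}
    for layer, ops in sorted(layer_ops.items(), key=lambda kv: -kv[1]):
        tile = loads.index(min(loads))
        assignment[layer] = tile
        loads[tile] += ops
    return assignment, loads
```

Sorting heaviest-first keeps any single large layer from landing on an already busy tile, which is what keeps the first and second utilization rates close to each other.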
Publication No.: US2025165818A1 22/05/2025
Applicant:
UNIV NORTH CAROLINA STATE [US]
North Carolina State University
Abstract of: US2025165818A1
Various examples are provided related to side-channel awareness. In one example, a method for side-channel awareness training includes generating trained models by stochastically training neural network models using a common training dataset; generating an inference model based upon random selection of parameters from one or more of the trained models; and training the inference model with the selected parameters. The trained inference model can be executed on an edge Tensor Processing Unit (TPU). The models can be trained offline. An input signal can be processed using the trained inference model to generate an output signal for transmission.
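The random-parameter-selection step can be sketched as below, with each trained model flattened to a parameter list. The flattened representation and the per-parameter draw are illustrative assumptions about the granularity of selection:

```python
import random

def build_inference_model(trained_models, seed=0):
    """Assemble an inference model by drawing each parameter at random
    from the corresponding position in one of the stochastically
    trained models."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    n_params = len(trained_models[0])
    return [rng.choice(trained_models)[i] for i in range(n_params)]
```

Randomizing which source model supplies each parameter decorrelates the deployed weights from any single training run, which is the lever the side-channel-awareness training pulls before the model is compiled for the edge TPU.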