Ministerio de Industria, Turismo y Comercio
 

Alert

Results: 31
Last updated: 03/08/2025 [07:12:00]
Applications published in the last 30 days
Results 25 to 31 of 31

Generation Of Graph-Based Dense Representations Of Events Of A Nodal Graph Through Deployment Of A Neural Network

Publication No.: US2025225183A1 10/07/2025
Applicant:
CISCO TECH INC [US]
Cisco Technology, Inc
US_2025225183_PA

Abstract of: US2025225183A1

A computerized method is disclosed that includes operations of receiving a plurality of alerts, generating a graph-based dense representation of each alert of the plurality of alerts including processing of each alert with a neural network, wherein a result of processing an individual alert by the neural network is a graph-based dense representation of the individual alert, computing relatedness scores between at least a subset of the plurality of alerts, and generating a graphical user interface illustrating a listing of at least a subset of the plurality of alerts, wherein the graphical user interface is configured to receive user input corresponding to selection of a first alert, wherein the graphical user interface is rendered on a display screen. Additionally, an additional operation may include training the neural network to produce graph-based dense representations, wherein the training is performed on a corpus of metapaths.
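The relatedness computation between alert representations can be sketched with cosine similarity over the dense vectors. The abstract does not name a specific metric, so the `cosine` helper, the metric itself, and the toy alert vectors below are all illustrative assumptions, not the patented method.

```python
import math

def cosine(u, v):
    """Relatedness score between two dense alert representations."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical dense representations, as if produced by the neural
# network from each alert's nodal graph.
alerts = {
    "alert_1": [0.9, 0.1, 0.0],
    "alert_2": [0.8, 0.2, 0.1],
    "alert_3": [0.0, 0.1, 0.9],
}

# Pairwise relatedness scores over a subset of the alerts.
scores = {
    (a, b): cosine(alerts[a], alerts[b])
    for a in alerts for b in alerts if a < b
}
```

Highly related alerts (here `alert_1` and `alert_2`) score near 1.0, which is what a listing UI would use to surface related alerts after the user selects one.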

REMOTE DISTRIBUTION OF NEURAL NETWORKS

Publication No.: US2025225614A1 10/07/2025
Applicant:
SNAP INC [US]
Snap Inc
US_2025225614_PA

Abstract of: US2025225614A1

Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
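The native-to-target conversion can be illustrated as a precision change: a "native" network storing compact 16-bit parameters is expanded to full-precision floats for a client environment that favors precision over efficiency. The layer name and the use of IEEE half precision via `struct` are assumptions for the sketch, not the patent's actual parameter format.

```python
import struct

def to_half(w):
    """Pack a weight into the compact 16-bit form the native network
    might use (IEEE half precision, purely as an illustration)."""
    return struct.pack('e', w)

def to_full(h):
    """Expand a compact weight to full precision for a client whose
    operating environment supports more precise parameters."""
    return struct.unpack('e', h)[0]

# Native network: compact parameters, keyed by an illustrative layer name.
native = {"dense_1": [to_half(w) for w in (0.5, -1.25, 2.0)]}

# Remote conversion to the target network for that client environment.
target = {layer: [to_full(h) for h in ws] for layer, ws in native.items()}
```

The conversion step is what would run server-side before distribution, so each client receives only the variant suited to its environment.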

REFLEXIVE MODEL GRADIENT-BASED RULE LEARNING

Publication No.: US2025225386A1 10/07/2025
Applicant:
RAYTHEON COMPANY [US]
Raytheon Company
US_2025225386_PA

Abstract of: US2025225386A1

Systems, devices, methods, and computer-readable media for reflexive model generation and inference. A method includes receiving probabilistic rules of a reflexive model that correlate evidence with the existence of an event of interest; training a neural network (NN), based on ground-truth examples of evidence and respective labels indicating whether the event of interest is, was, or will be present, to encode the probabilistic rules and learn respective probabilities for them; and providing, by the NN and responsive to new evidence, an output indicating a likelihood that the event of interest exists.
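One way to read "encoding probabilistic rules in a NN" is as learned log-odds weights combined through a sigmoid. The rule names, weights, and bias below are invented for illustration; in the patented method the probabilities are learned from ground-truth examples rather than fixed by hand.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical rules mapping evidence to log-odds weights. A real
# reflexive model would learn these values during training.
rule_weights = {"radar_contact": 2.0, "comms_silence": 1.0, "weather_clear": -0.5}
bias = -1.5

def event_likelihood(evidence):
    """Likelihood that the event of interest exists, given binary evidence."""
    z = bias + sum(w for name, w in rule_weights.items() if evidence.get(name))
    return sigmoid(z)
```

Feeding new evidence through the learned weights yields the likelihood output described in the abstract.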

METHOD AND SYSTEM FOR AUGMENTED SPEECH EMBEDDINGS BASED AUTOMATIC SPEECH RECOGNITION

Publication No.: EP4583099A1 09/07/2025
Applicant:
TATA CONSULTANCY SERVICES LTD [IN]
Tata Consultancy Services Limited
EP_4583099_PA

Abstract of: EP4583099A1

Though several data augmentation techniques have been explored in the signal or feature space, very few studies have explored augmentation in the embedding space for Automatic Speech Recognition (ASR). The outputs of the hidden layers of a neural network can be seen as different representations or projections of the features. The augmentations performed on the features may not necessarily translate into augmentation of the different projections of the features as obtained from the output of the hidden layers. To overcome the challenges of the conventional approaches, embodiments herein provide a method and system for augmented speech embeddings based automatic speech recognition. The present disclosure provides an augmentation scheme which works on the speech embeddings. The augmentation works by replacing a set of randomly selected embeddings by noise during training. It does not require additional data, works online during training, and adds very little to the overall computational cost.
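The augmentation scheme itself is simple to sketch: during training, each embedding is replaced with noise with some probability. The parameter names (`replace_prob`, `noise_scale`) and the choice of Gaussian noise are assumptions; the abstract only says that randomly selected embeddings are replaced by noise.

```python
import random

def augment_embeddings(embeddings, replace_prob=0.1, noise_scale=1.0, rng=None):
    """Replace a randomly selected subset of speech embeddings with noise.
    Runs online during training and requires no additional data."""
    rng = rng or random.Random()
    out = []
    for emb in embeddings:
        if rng.random() < replace_prob:
            # Replace the whole embedding vector with sampled noise.
            out.append([rng.gauss(0.0, noise_scale) for _ in emb])
        else:
            out.append(list(emb))
    return out
```

Because the replacement happens on the embeddings (hidden-layer outputs) rather than the input features, it augments the projections directly, which is the gap the abstract identifies in conventional signal- or feature-space augmentation.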

INTEGRATION OF LEARNED DIFFERENTIABLE LOSS FUNCTIONS IN DEEP LEARNING MODELS

Publication No.: EP4583004A1 09/07/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4583004_PA

Abstract of: EP4583004A1

Systems and methods are disclosed herein for training a model with a learned loss function. In an example system, a first trained neural network is generated based on application of a first loss function, such as a predefined loss function. A set of values is extracted from one or more of the layers of the neural network model, such as the weights of one of the layers. A separate machine learning model is trained using the set of values and a set of labels (e.g., ground truth annotations for a set of data). The machine learning model outputs a symbolic equation based on the training. The symbolic equation is applied to the first trained neural network to generate a second trained neural network. In this manner, a learned loss function can be generated and used to train a neural network, resulting in improved performance of the neural network.
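The three-stage pipeline can be sketched under heavy simplification: a one-parameter "network", plain gradient descent as training, and a scaled squared error standing in for the symbolic equation. A real system would run symbolic regression on values extracted from the trained layers; the `scale = w1` step below is only an illustrative stand-in for that extraction-and-fit stage.

```python
def train(weight, data, loss_grad, lr=0.1, steps=300):
    """Plain SGD loop driven by whichever loss gradient is supplied."""
    for _ in range(steps):
        for x, y in data:
            weight -= lr * loss_grad(weight, x, y)
    return weight

# Stage 1: train a first "network" (a single weight) with a predefined
# loss, here mean squared error on the model y = w * x.
def mse_grad(w, x, y):
    return 2 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # ground truth: y = 2x
w1 = train(0.0, data, mse_grad)

# Stage 2: "extract" a value from the trained model and produce a
# symbolic equation from it; here the learned equation is simply a
# squared error scaled by the extracted value.
scale = w1

def learned_grad(w, x, y):
    return scale * 2 * (w * x - y) * x

# Stage 3: train a second network using the learned loss function.
w2 = train(0.0, data, learned_grad)
```

Both stages recover the underlying weight; the point of the sketch is the data flow (predefined loss, extract, fit equation, retrain), not the toy model.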

CPU DEEP NEURAL NETWORK INFERENCE METHOD BASED ON IDLE CPU RESOURCES TO MAXIMIZE RESOURCE UTILIZATION IN LEARNING CLUSTERS

Publication No.: KR20250102720A 07/07/2025

Applicant:

성균관대학교산학협력단 (Sungkyunkwan University Industry-Academic Cooperation Foundation)

WO_2025143765_PA

Abstract of: WO2025143765A1

A deep neural network inference method based on idle CPU resources, according to the present specification, may comprise the steps of: when a training task is reserved and all GPUs are in use, classifying CPU cores into a training-task-specific group and classifying unallocated idle CPU cores into an unallocated group (U group); executing the training task using cores of the training-task-specific group and, when an online inference task is requested, executing it using idle cores of the U group; and, when a batch inference task is requested, additionally executing it using at least one idle core from the U group or from the training-task-specific group.
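The core-classification scheme reduces to a few list operations. The group names, task labels, and core counts below are illustrative; the abstract does not specify how cores are selected for each group.

```python
def classify_cores(all_cores, training_need):
    """Split cores: the first `training_need` go to the training-task-
    specific group, the rest form the unallocated U group."""
    return all_cores[:training_need], all_cores[training_need:]

def assign(task, u_group, idle_training_cores):
    """Online inference runs only on U-group cores; batch inference may
    additionally borrow idle cores from the training group."""
    if task == "online_inference":
        return list(u_group)
    if task == "batch_inference":
        return list(u_group) + list(idle_training_cores)
    raise ValueError(f"unknown task: {task}")

cores = list(range(8))                       # e.g. an 8-core node
train_group, u_group = classify_cores(cores, 6)
```

Keeping online inference off the training group bounds its interference with the reserved training task, while batch inference soaks up whatever is idle, which is how the method maximizes cluster utilization.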
