Ministerio de Industria, Turismo y Comercio
 

Alert

39 results
Last update: 08/08/2025 [07:45:00]
Applications published in the last 30 days
Results 25 to 39 of 39

USING A RECURRENT NEURAL NETWORK AND A CLASSIFICATION MACHINE LEARNING MODEL TO PREDICT ACTIONS IN SOFTWARE APPLICATIONS

Publication No.: US2025232178A1 17/07/2025
Applicant:
INTUIT INC [US]
US_2025232178_PA

Abstract of: US2025232178A1

Aspects of the present disclosure provide techniques for machine learning based action prediction. Embodiments include providing, as inputs to a recurrent neural network (RNN), an ordered sequence of strings representing actions performed by a user within a software application, the RNN having been trained through a supervised learning process to generate embeddings of the ordered sequence of strings and generate a numerical score relating to a target action based on the embeddings and an order of the ordered sequence of strings. Embodiments include providing, as respective inputs to a tree-based classification machine learning model, the numerical score and an additional feature relating to the user and receiving, as a respective output from the tree-based classification machine learning model in response to the respective inputs, a propensity score indicating a likelihood of the user to perform the target action.
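As an illustration only, the two-stage flow this abstract describes (an RNN turns an ordered action sequence into a numerical score, which a tree-based classifier combines with an extra user feature into a propensity score) might be sketched as follows. All names, weights, and thresholds here are invented stand-ins, not taken from the patent:

```python
import math
import zlib


def embed(action, dim=4):
    # Deterministic toy embedding of an action string (stand-in for the
    # RNN's learned embeddings), derived from a CRC32 hash of the string.
    seed = zlib.crc32(action.encode())
    return [((seed >> (8 * i)) & 0xFF) / 127.5 - 1.0 for i in range(dim)]


def rnn_score(actions, dim=4):
    # Minimal Elman-style recurrence over the ordered sequence; the fixed
    # 0.5 weights stand in for parameters learned via supervised training.
    h = [0.0] * dim
    for a in actions:
        x = embed(a, dim)
        h = [math.tanh(0.5 * xi + 0.5 * hi) for xi, hi in zip(x, h)]
    # Collapse the final hidden state into a single score in (0, 1).
    return 1.0 / (1.0 + math.exp(-sum(h)))


def tree_propensity(score, extra_feature):
    # Hand-written decision stump standing in for the trained tree-based
    # classifier that fuses the RNN score with an additional user feature.
    if score > 0.5 and extra_feature > 0:
        return 0.9
    if score > 0.5 or extra_feature > 0:
        return 0.2
    return 0.05


sequence = ["open_form", "edit_field", "save_draft"]
score = rnn_score(sequence)
propensity = tree_propensity(score, extra_feature=1)
```

A real implementation would train the recurrence and use a gradient-boosted tree library; the stump above only shows how the two model outputs compose.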

ELECTRONIC DEVICE FOR ACQUIRING IMAGE AND OPERATION METHOD THEREFOR

Publication No.: WO2025150734A1 17/07/2025
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
삼성전자 주식회사
WO_2025150734_A1

Abstract of: WO2025150734A1

In an embodiment, an electronic device may comprise at least one processor, and a memory for storing one or more instructions. The memory may further store a first sub-neural network model and a second sub-neural network model. The one or more instructions may be executed by the at least one processor so as to cause the electronic device to: acquire N image frames; acquire feature information for each of the image frames by inputting the image frames one by one to the first sub-neural network model; and acquire an enhanced image by inputting, to the second sub-neural network model, integrated feature information obtained by merging the feature information for each of the image frames. The first sub-neural network model may be configured to receive one image frame and output feature information. The second sub-neural network model may be configured to output an enhanced image on the basis of input feature information.
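The pipeline in this abstract (per-frame features from a first sub-network, merged into integrated feature information, fed to a second sub-network that emits an enhanced image) can be sketched with trivial stand-in functions. The mean/peak summaries and the blending rule below are invented for illustration, not from the patent:

```python
def first_subnet(frame):
    # Stand-in for the first sub-network: summarises one image frame
    # (a flat list of pixel values) into a (mean, peak) feature tuple.
    return (sum(frame) / len(frame), max(frame))


def merge_features(features):
    # Integrate the per-frame feature tuples by component-wise averaging.
    n = len(features)
    return tuple(sum(f[i] for f in features) / n for i in range(2))


def second_subnet(integrated, frame):
    # Stand-in for the second sub-network: pulls each pixel of the last
    # frame toward the integrated mean to produce the "enhanced" image.
    mean, _peak = integrated
    return [0.5 * p + 0.5 * mean for p in frame]


frames = [[0.1, 0.2, 0.3], [0.2, 0.3, 0.4], [0.3, 0.4, 0.5]]
features = [first_subnet(f) for f in frames]   # frames fed one by one
enhanced = second_subnet(merge_features(features), frames[-1])
```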

LANGUAGE MODEL NEURAL NETWORKS FOR DOMAIN-SPECIFIC CONVERSATIONAL AGENTS

Publication No.: WO2025151795A1 17/07/2025
Applicant:
GOOGLE LLC [US]
WO_2025151795_PA

Abstract of: WO2025151795A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training and use of a language model neural network as a domain-specific conversational agent for a particular domain.

INFERENCE METHOD AND APPARATUS FOR NEURAL NETWORK MODEL, AND RELATED DEVICE

Publication No.: EP4586109A1 16/07/2025
Applicant:
HUAWEI TECH CO LTD [CN]
Huawei Technologies Co., Ltd
EP_4586109_PA

Abstract of: EP4586109A1

This application provides an inference method for a neural network model. The method is applied to a computing cluster that includes a plurality of inference servers and a memory pool, where each inference server includes at least one inference card and a local memory. The method includes: a first inference card of a first inference server in the computing cluster receives an inference task. If a parameter for executing the inference task is not hit in the first inference card, the first inference card obtains the parameter from the local memory of the first inference server. If the parameter is not hit in that local memory, the first inference card obtains the parameter from the memory pool. The first inference card then executes the inference task based on all obtained parameters. The high-speed read/write capability of the local memory of the first inference server speeds up parameter retrieval by the first inference card, reducing the delay of obtaining parameters and meeting the low-latency requirement of executing the inference task. This application further provides a corresponding apparatus, computing cluster, and storage medium.
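The tiered parameter lookup this abstract describes (on-card cache, then server-local memory, then cluster-wide memory pool) is essentially a cache hierarchy. A minimal sketch, with all class and method names invented for illustration:

```python
class InferenceCard:
    def __init__(self, local_memory, memory_pool):
        self.cache = {}                   # on-card parameter cache (fastest)
        self.local_memory = local_memory  # per-server memory (fast)
        self.memory_pool = memory_pool    # cluster-wide pool (slowest, complete)

    def get_parameter(self, name):
        # Tiered lookup: card cache, then local memory, then the memory pool;
        # parameters are promoted to faster tiers as they are fetched.
        if name in self.cache:
            return self.cache[name], "card"
        if name in self.local_memory:
            value = self.local_memory[name]
            tier = "local"
        else:
            value = self.memory_pool[name]
            self.local_memory[name] = value  # promote for future local hits
            tier = "pool"
        self.cache[name] = value             # promote onto the card
        return value, tier


pool = {"layer0.w": [1.0], "layer1.w": [2.0]}
card = InferenceCard(local_memory={}, memory_pool=pool)
_, t1 = card.get_parameter("layer0.w")   # misses everywhere but the pool
_, t2 = card.get_parameter("layer0.w")   # now served from the card cache
```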

REFLEXIVE MODEL GRADIENT-BASED RULE LEARNING

Publication No.: US2025225386A1 10/07/2025
Applicant:
RAYTHEON COMPANY [US]
US_2025225386_PA

Abstract of: US2025225386A1

Systems, devices, methods, and computer-readable media for reflexive model generation and inference. A method includes receiving probabilistic rules of a reflexive model that correlate evidence with existence of an event of interest; training, based on ground truth examples of evidence and respective labels indicating whether the event of interest is, was, or will be present, a neural network (NN) to encode the probabilistic rules and learn respective probabilities for the probabilistic rules; and providing, by the NN and responsive to new evidence, an output indicating a likelihood that the event of interest exists.
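One common way to combine probabilistic rules that correlate evidence with an event is a noisy-OR model; the sketch below uses it purely as an illustration of the idea (the patent learns the rule probabilities with a neural network, whereas here they are fixed by hand):

```python
def event_likelihood(evidence, rules):
    # Noisy-OR combination: each rule whose predicate matches the evidence
    # independently "fires" with its probability; the event is absent only
    # if no matching rule fires.
    p_absent = 1.0
    for predicate, prob in rules:
        if predicate(evidence):
            p_absent *= 1.0 - prob
    return 1.0 - p_absent


rules = [
    (lambda e: e.get("sensor_a", False), 0.7),  # stand-in learned probability
    (lambda e: e.get("sensor_b", False), 0.4),
]
p = event_likelihood({"sensor_a": True, "sensor_b": True}, rules)
# 1 - (1 - 0.7) * (1 - 0.4) = 0.82
```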

Inference Method and Apparatus for Neural Network Model, and Related Device

Publication No.: US2025225419A1 10/07/2025
Applicant:
HUAWEI TECH CO LTD [CN]
Huawei Technologies Co., Ltd
US_2025225419_PA

Abstract of: US2025225419A1

A method is applied to a computing cluster. The computing cluster includes a plurality of inference servers and a memory pool. Each inference server includes at least one inference card and a local memory. The method includes: a first inference card of a first inference server in the computing cluster receives an inference task. If a parameter for executing the inference task is not hit in the first inference card, the first inference card obtains the parameter from the local memory of the first inference server. If the parameter is not hit in that local memory, the first inference card obtains the parameter from the memory pool. The first inference card then executes the inference task based on all obtained parameters.

Generation Of Graph-Based Dense Representations Of Events Of A Nodal Graph Through Deployment Of A Neural Network

Publication No.: US2025225183A1 10/07/2025
Applicant:
CISCO TECH INC [US]
Cisco Technology, Inc
US_2025225183_PA

Abstract of: US2025225183A1

A computerized method is disclosed that includes operations of receiving a plurality of alerts, generating a graph-based dense representation of each alert of the plurality of alerts including processing of each alert with a neural network, wherein a result of processing an individual alert by the neural network is a graph-based dense representation of the individual alert, computing relatedness scores between at least a subset of the plurality of alerts, and generating a graphical user interface illustrating a listing of at least a subset of the plurality of alerts, wherein the graphical user interface is configured to receive user input corresponding to selection of a first alert, wherein the graphical user interface is rendered on a display screen. Additionally, an additional operation may include training the neural network to produce graph-based dense representations, wherein the training is performed on a corpus of metapaths.
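The core of this abstract (map each alert to a dense representation, then compute relatedness scores between alerts) can be illustrated with a toy embedding and cosine similarity. The character-count embedding below is an invented stand-in for the neural network in the patent:

```python
import math


def dense_representation(alert, dim=8):
    # Stand-in for the neural network: a deterministic bag-of-characters
    # vector for the alert text.
    vec = [0.0] * dim
    for ch in alert:
        vec[ord(ch) % dim] += 1.0
    return vec


def relatedness(u, v):
    # Cosine similarity between two dense representations.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


alerts = ["failed login from host A", "failed login from host B", "disk full"]
reps = [dense_representation(a) for a in alerts]
scores = [[relatedness(u, v) for v in reps] for u in reps]
```

The two near-identical login alerts score higher against each other than against the unrelated disk alert, which is the behaviour a UI listing related alerts would rely on.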

METHODS AND SYSTEMS FOR CLASSIFICATION OF EGGS AND EMBRYOS USING MORPHOLOGICAL AND MORPHO-KINETIC SIGNATURE

Publication No.: US2025225798A1 10/07/2025
Applicant:
FAIRTILITY LTD [IL]
US_2025225798_PA

Abstract of: US2025225798A1

Methods and systems are described for classifying unfertilized eggs. For example, using control circuitry, first images of fertilized eggs can be received, and the first images can be labeled with known classifications. Using the control circuitry, an artificial neural network can be trained to detect the known classifications based on the first images of the fertilized eggs and a second image can be received of an unfertilized egg with an unknown classification. Using the control circuitry, the second image can be input into the trained artificial neural network and a prediction from the trained artificial neural network can be received that the second image corresponds to one or more of the known classifications.

APPARATUS AND METHOD FOR MONITORING AND CONTROLLING OF A NEURAL NETWORK USING ANOTHER NEURAL NETWORK IMPLEMENTED ON ONE OR MORE SOLID-STATE CHIPS

Publication No.: US2025224724A1 10/07/2025
Applicant:
APEX AI IND LLC [US]
Apex AI Industries, LLC
US_2025224724_PA

Abstract of: US2025224724A1

A method of operating an apparatus using a control system that includes at least one neural network. The method includes receiving an input value captured by the apparatus, processing the input value using the at least one neural network of the control system implemented on first one or more solid-state chips, and obtaining an output from the at least one neural network resulting from processing the input value. The method may also include processing the output with another neural network implemented on solid-state chips to determine whether the output breaches a predetermined condition that is unchangeable after an initial installation onto the control system. The aforementioned another neural network is prevented from being retrained. The method may also include the step of using the output from the at least one neural network to control the apparatus unless the output breaches the predetermined condition. Similar corresponding apparatuses are described.
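The control flow in this abstract (a primary network proposes an output, a second, never-retrained network checks it against a fixed condition, and the output is used only if no breach is found) might look like the following sketch. The functions, the limit, and the fallback are invented stand-ins:

```python
def primary_network(input_value):
    # Stand-in for the control network: proposes a control output.
    return input_value * 2.0


def guard_network(output):
    # Stand-in for the monitoring network; the predetermined condition is
    # fixed at installation time and the guard is never retrained.
    BREACH_LIMIT = 10.0
    return abs(output) > BREACH_LIMIT


def control_step(input_value, fallback=0.0):
    out = primary_network(input_value)
    # Use the primary output to control the apparatus unless it breaches
    # the predetermined condition.
    return fallback if guard_network(out) else out


safe = control_step(3.0)      # within limits: primary output used
blocked = control_step(8.0)   # breach detected: fallback applied
```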

INTEGRATION OF LEARNED DIFFERENTIABLE LOSS FUNCTIONS IN DEEP LEARNING MODELS

Publication No.: US2025225384A1 10/07/2025
Applicant:
MICROSOFT TECH LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2025225384_PA

Abstract of: US2025225384A1

Systems and methods are disclosed herein for training a model with a learned loss function. In an example system, a first trained neural network is generated based on application of a first loss function, such as a predefined loss function. A set of values is extracted from one or more of the layers of the neural network model, such as the weights of one of the layers. A separate machine learning model is trained using the set of values and a set of labels (e.g., ground truth annotations for a set of data). The machine learning model outputs a symbolic equation based on the training. The symbolic equation is applied to the first trained neural network to generate a second trained neural network. In this manner, a learned loss function can be generated and used to train a neural network, resulting in improved performance of the neural network.
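The three-stage pipeline in this abstract (train with a predefined loss, derive a symbolic loss equation from a separate model, retrain with the learned loss) can be illustrated on a one-parameter regression. Everything below is a toy stand-in: the "separate model" is replaced by a hard-coded symbolic equation rather than an actual symbolic-regression step:

```python
def train(loss_grad, steps=200, lr=0.1):
    # Gradient-descent fit of y = w * x on toy data, under a pluggable
    # loss gradient (the derivative of the loss w.r.t. the prediction).
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    w = 0.0
    for _ in range(steps):
        g = sum(loss_grad(w * x, y) * x for x, y in data) / len(data)
        w -= lr * g
    return w


# Stage 1: first network trained with a predefined loss (squared error).
mse_grad = lambda pred, y: 2.0 * (pred - y)
w1 = train(mse_grad)

# Stage 2: stand-in for the separate model that emits a symbolic equation;
# here it "discovers" a scaled squared error as the learned loss.
learned_grad = lambda pred, y: 0.5 * 2.0 * (pred - y)

# Stage 3: the learned symbolic loss is applied to train a second network.
w2 = train(learned_grad)
```

Both runs converge to the true slope of 2; the point is only that the loss function is swappable and can itself be the output of a learning step.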

REMOTE DISTRIBUTION OF NEURAL NETWORKS

Publication No.: US2025225614A1 10/07/2025
Applicant:
SNAP INC [US]
US_2025225614_PA

Abstract of: US2025225614A1

Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
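Converting a native network with "efficient parameters" into a target network suited to a client environment is commonly done via quantization; the sketch below uses simple 8-bit linear quantization as one plausible reading. The environment names and dictionary format are invented for illustration:

```python
def quantize(weights, bits=8):
    # Map float weights to signed integers with a single scale factor.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale


def dequantize(q, scale):
    return [v * scale for v in q]


def convert_for_client(native_weights, environment):
    # Server-side conversion: constrained clients get a compact 8-bit
    # target network; capable clients keep the precise native parameters.
    if environment == "low_memory":
        q, scale = quantize(native_weights)
        return {"format": "int8", "weights": q, "scale": scale}
    return {"format": "float32", "weights": native_weights}


native = [0.5, -1.0, 0.25]
target = convert_for_client(native, "low_memory")
```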

DATA-DRIVEN STATE ESTIMATION AND SYSTEM CONTROL UNDER UNCERTAINTY

Publication No.: WO2025146824A1 10/07/2025
Applicant:
MITSUBISHI ELECTRIC CORP [JP]
MITSUBISHI ELECTRIC CORPORATION
WO_2025146824_PA

Abstract of: WO2025146824A1

A control method for controlling an electro-mechanical system according to a task estimates the state of the system using an adaptive surrogate model of the system to produce an estimation of the state of the system. The adaptive surrogate model includes a neural network employing a weighted combination of neural ODEs of dynamics of the system in latent space, such that weights of the weighted combination of neural ODEs represent the uncertainty. The method controls the system according to the task based on the estimation of the state of the system and tunes the weights of the weighted combination of neural ODEs based on the controlling.
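A weighted combination of candidate ODEs, with the weights tuned from observations, can be sketched with plain forward-Euler integration. The two linear decay models, the tuning rule, and all constants below are invented stand-ins for the patent's trained neural ODEs and uncertainty weights:

```python
def euler(f, x0, dt, steps):
    # Forward-Euler rollout of dx/dt = f(x).
    x = x0
    for _ in range(steps):
        x += f(x) * dt
    return x


# Two candidate latent dynamics (stand-ins for trained neural ODEs).
ode_a = lambda x: -1.0 * x
ode_b = lambda x: -3.0 * x


def surrogate(x0, weights, dt=0.01, steps=100):
    # Weighted combination of the ODEs; the weights encode uncertainty
    # over which dynamics model describes the system.
    wa, wb = weights
    return euler(lambda x: wa * ode_a(x) + wb * ode_b(x), x0, dt, steps)


def tune(weights, x0, observed):
    # Crude tuning rule: bump the weight of the ODE whose own rollout
    # best matches the observation, then renormalise.
    errs = [abs(euler(f, x0, 0.01, 100) - observed) for f in (ode_a, ode_b)]
    w = list(weights)
    w[errs.index(min(errs))] += 0.5
    s = sum(w)
    return (w[0] / s, w[1] / s)


x0, observed = 1.0, 0.05        # the true system decays roughly like ode_b
w = (0.5, 0.5)
for _ in range(5):
    w = tune(w, x0, observed)
estimate = surrogate(x0, w)     # state estimate under the tuned weights
```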

INTEGRATION OF LEARNED DIFFERENTIABLE LOSS FUNCTIONS IN DEEP LEARNING MODELS

Publication No.: EP4583004A1 09/07/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4583004_PA

Abstract of: EP4583004A1

Systems and methods are disclosed herein for training a model with a learned loss function. In an example system, a first trained neural network is generated based on application of a first loss function, such as a predefined loss function. A set of values is extracted from one or more of the layers of the neural network model, such as the weights of one of the layers. A separate machine learning model is trained using the set of values and a set of labels (e.g., ground truth annotations for a set of data). The machine learning model outputs a symbolic equation based on the training. The symbolic equation is applied to the first trained neural network to generate a second trained neural network. In this manner, a learned loss function can be generated and used to train a neural network, resulting in improved performance of the neural network.

METHOD AND SYSTEM FOR AUGMENTED SPEECH EMBEDDINGS BASED AUTOMATIC SPEECH RECOGNITION

Publication No.: EP4583099A1 09/07/2025
Applicant:
TATA CONSULTANCY SERVICES LTD [IN]
Tata Consultancy Services Limited

EP_4583099_PA

Abstract of: EP4583099A1

Though several data augmentation techniques have been explored in the signal or feature space, very few studies have explored augmentation in the embedding space for Automatic Speech Recognition (ASR). The outputs of the hidden layers of a neural network can be seen as different representations or projections of the features. The augmentations performed on the features may not necessarily translate into augmentation of the different projections of the features as obtained from the output of the hidden layers. To overcome the challenges of the conventional approaches, embodiments herein provide a method and system for augmented speech embeddings based automatic speech recognition. The present disclosure provides an augmentation scheme which works on the speech embeddings. The augmentation works by replacing a set of randomly selected embeddings by noise during training. It does not require additional data, works online during training, and adds very little to the overall computational cost.
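The augmentation scheme this abstract describes (replace a randomly selected subset of speech embeddings with noise during training, online and without extra data) is simple to sketch. The function name, probability, and noise scale below are illustrative choices, not values from the patent:

```python
import random


def augment_embeddings(embeddings, replace_prob=0.2, noise_std=1.0, seed=0):
    # Replace each embedding vector with Gaussian noise with probability
    # replace_prob; untouched embeddings are copied through unchanged.
    # Applied online during training, so no additional data is required.
    rng = random.Random(seed)
    out = []
    for emb in embeddings:
        if rng.random() < replace_prob:
            out.append([rng.gauss(0.0, noise_std) for _ in emb])
        else:
            out.append(list(emb))
    return out


batch = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8], [0.9, 1.0]]
augmented = augment_embeddings(batch, replace_prob=0.5)
```

In a real ASR model this would sit between a hidden layer and the layers above it, so the noise corrupts the learned projections rather than the input features.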
