Ministerio de Industria, Turismo y Comercio
 

Neural networks

35 results.
Updated on 23/10/2025 [07:39:00]
Applications published in the last 30 days
Results 25 to 35 of 35

SOLUTIONS DELIVERY - SOLUTIONS DISCOVERY TOOL

Publication No.:  US2025299067A1 25/09/2025
Applicant: 
TRUIST BANK [US]
Truist Bank
US_2025299067_PA

Abstract of: US2025299067A1

A system and method for automatically providing a bank agent with questions to ask a client of the bank based on known information about the client and answers to previous questions provided to the client, and then providing a financial solution or product that may help the client. The method includes asking the client an initial question, providing an answer by the client to the initial question, providing a follow-up question in response to the answer provided to the initial question that is generated by a machine learning model in a processor, accepting an answer to the follow-up question, and providing additional follow-up questions in response to previous questions and answers that are generated by the machine learning model, where the machine learning model uses at least one neural network having nodes that have been trained to provide the questions based on the previous questions and answers.
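As a minimal illustration of the question/answer loop this abstract describes, here is a Python sketch; `next_question` is a hypothetical rule-based stand-in for the patent's trained neural-network model, and the questions and keywords are invented for the example.

```python
# Sketch of the agent-assist loop: ask, record the answer, let a model
# (here a toy keyword lookup) pick the follow-up question.

def next_question(history):
    """Pick a follow-up question from prior (question, answer) pairs.

    Stand-in for the trained neural network described in the abstract."""
    rules = {
        "house": "Are you interested in a mortgage?",
        "retire": "Would you like to review retirement products?",
    }
    last_answer = history[-1][1].lower()
    for keyword, question in rules.items():
        if keyword in last_answer:
            return question
    return None  # no further follow-up

def run_session(initial_question, answer_fn, max_turns=5):
    """Ask questions until the model has no follow-up or max_turns is hit."""
    history = [(initial_question, answer_fn(initial_question))]
    while len(history) < max_turns:
        q = next_question(history)
        if q is None:
            break
        history.append((q, answer_fn(q)))
    return history

answers = {
    "What are your financial goals this year?": "We want to buy a house.",
    "Are you interested in a mortgage?": "Yes, tell me more.",
}
session = run_session("What are your financial goals this year?",
                      lambda q: answers.get(q, "Not sure."))
```

In the claimed system the follow-up selection would come from trained network nodes rather than a keyword table; the loop structure is the point of the sketch.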

FEATURE-SPECIFIC ATTENTION ARRAYS FOR EVENT SEQUENCE CHARACTERIZATION

Publication No.:  US2025299066A1 25/09/2025
Applicant: 
CAPITAL ONE SERVICES LLC [US]
Capital One Services, LLC
US_2025299066_PA

Abstract of: US2025299066A1

A method and related system for efficiently capturing relationships between event feature values in embeddings includes flattening an event sequence into a feature sequence including a first event prefix, a second event prefix, and a first set of feature values. The method includes generating an attention mask including first mask indicators to associate the first set of feature values with each other and second mask indicator to associate a first feature value of the first set of feature values with the second event prefix. The method includes providing the feature sequence and the attention mask to a self-attention neural network model to generate an embedding.
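The flattening and mask construction can be sketched concretely; the token layout (one prefix token per event, followed by its feature values) and the exact mask semantics below are assumptions chosen to match the abstract's wording.

```python
# Flatten an event sequence into a feature sequence with event prefixes,
# then build an attention mask: feature values of one event attend to each
# other, and each feature value also attends to the next event's prefix.

def flatten_events(events):
    """Flatten [[f1, f2, ...], ...] into a token list with event prefixes."""
    tokens = []
    for i, features in enumerate(events):
        tokens.append(f"<event_{i}>")  # event prefix token
        tokens.extend(features)
    return tokens

def build_attention_mask(events):
    """Return (tokens, mask) where mask[i][j] == 1 allows attention."""
    tokens = flatten_events(events)
    n = len(tokens)
    mask = [[0] * n for _ in range(n)]
    pos = 0
    prefix_positions, spans = [], []  # spans: feature-value ranges per event
    for features in events:
        prefix_positions.append(pos)
        spans.append((pos + 1, pos + 1 + len(features)))
        pos += 1 + len(features)
    for k, (start, end) in enumerate(spans):
        for i in range(start, end):
            for j in range(start, end):
                mask[i][j] = 1  # intra-event feature attention
            if k + 1 < len(prefix_positions):
                mask[i][prefix_positions[k + 1]] = 1  # link to next prefix
    return tokens, mask

tokens, mask = build_attention_mask([["amt=10", "mcc=food"], ["amt=25"]])
```

The mask would then be passed alongside the token sequence to a self-attention model to produce the embedding.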

CUSTOMIZED MACHINE LEARNING MODELS

Publication No.:  US2025299041A1 25/09/2025
Applicant: 
AMAZON TECH INC [US]
Amazon Technologies, Inc
US_12354002_PA

Abstract of: US2025299041A1

An adapter layer may be used to customize a machine learning component by transforming data flowing into, out of, and/or within the machine learning component. The adapter layer may include a number of neural network components, or “adapters,” configured to perform a transformation on input data. Neural network components may be configured into adapter groups. A router component can, based on the input data, select one or more neural network components for transforming the input data. The input layer may combine the results of any such transformations to yield adapted data. Different adapter groups can include adapters of different complexity (e.g., involving different amounts of computation and/or latency). Thus, the amount of computation or latency added by an adapter layer can be reduced for simpler transformations of the input data.
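A toy sketch of the routing idea follows; the adapters are trivial functions standing in for trained neural-network components, and the variance-based routing rule is purely an illustrative assumption.

```python
# Router picks a cheap or expensive "adapter" transformation per input,
# so simple inputs pay less compute, as the abstract describes.

def cheap_adapter(x):
    """Low-cost transformation: simple scaling."""
    return [v * 0.5 for v in x]

def expensive_adapter(x):
    """Higher-cost transformation: a (pretend) deeper computation."""
    return [v * v + 0.1 for v in x]

def route(x):
    """Router: pick an adapter based on input complexity (assumed rule)."""
    mean = sum(x) / len(x)
    variance = sum((v - mean) ** 2 for v in x) / len(x)
    return expensive_adapter if variance > 1.0 else cheap_adapter

def adapter_layer(x):
    adapter = route(x)
    return adapter(x)

simple_out = adapter_layer([1.0, 1.0, 1.0])    # low variance -> cheap path
complex_out = adapter_layer([0.0, 5.0, -5.0])  # high variance -> expensive
```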

FEATURE-SPECIFIC ATTENTION ARRAYS FOR EVENT SEQUENCE CHARACTERIZATION

Publication No.:  WO2025199173A1 25/09/2025
Applicant: 
CAPITAL ONE SERVICES LLC [US]
CAPITAL ONE SERVICES, LLC
US_2025299066_PA

Abstract of: WO2025199173A1

A method and related system for efficiently capturing relationships between event feature values in embeddings includes flattening an event sequence into a feature sequence including a first event prefix, a second event prefix, and a first set of feature values. The method includes generating an attention mask including first mask indicators to associate the first set of feature values with each other and second mask indicator to associate a first feature value of the first set of feature values with the second event prefix. The method includes providing the feature sequence and the attention mask to a self-attention neural network model to generate an embedding.

REAL-TIME DETECTION OF NETWORK THREATS USING A GRAPH-BASED MODEL

Publication No.:  WO2025199388A1 25/09/2025
Applicant: 
UNIV ILLINOIS [US]
THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
WO_2025199388_PA

Abstract of: WO2025199388A1

The present disclosure provides methods and systems to perform intrusion detection on a computing system using streaming embedding and detection alongside other improvements. Intrusion detection may be implemented by recording events occurring within a computing system in an audit log. From this audit log, a provenance graph representing the events and causal relationships of the events occurring within the computing system may be generated. The provenance graph may be supplemented by a pseudo-graph that connects each event occurring in the computing system to one or more root causes. Then, a neural network may be trained to represent behavior of the computing system based on this pseudo-graph. The present disclosure also provides other systems and methods of intrusion detection and modeling computing system behavior.
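The provenance-graph and pseudo-graph steps can be sketched as follows; the audit-log record format (parent/child causal pairs) and the direct root-to-event edge construction are assumptions for illustration.

```python
# Build a provenance graph from an audit log, then add pseudo-graph edges
# connecting every event directly back to its root causes.

def build_provenance(audit_log):
    """audit_log: list of (parent_event, child_event) causal records."""
    graph = {}
    for parent, child in audit_log:
        graph.setdefault(parent, []).append(child)
        graph.setdefault(child, [])
    return graph

def root_causes(graph):
    """Roots are events that are never a child of another event."""
    children = {c for kids in graph.values() for c in kids}
    return [node for node in graph if node not in children]

def pseudo_graph_edges(graph):
    """Connect every reachable event directly to each root cause."""
    edges = []
    for root in root_causes(graph):
        stack, seen = [root], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node != root:
                edges.append((root, node))
            stack.extend(graph[node])
    return edges

log = [("login", "shell"), ("shell", "download"), ("download", "exec")]
graph = build_provenance(log)
edges = pseudo_graph_edges(graph)
```

In the disclosed system a neural network would then be trained on this pseudo-graph to model normal system behavior.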

VIDEO UPSAMPLING USING ONE OR MORE NEURAL NETWORKS

Publication No.:  US2025299295A1 25/09/2025
Applicant: 
NVIDIA CORP [US]
NVIDIA Corporation
US_12394015_PA

Abstract of: US2025299295A1

Apparatuses, systems, and techniques to enhance video are disclosed. In at least one embodiment, one or more neural networks are used to create a higher resolution video using upsampled frames from a lower resolution video.
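The abstract is high level; as a minimal point of reference, here is plain nearest-neighbour frame upsampling — the baseline that the patent's neural networks would refine into a higher-quality high-resolution frame.

```python
# Nearest-neighbour upscaling of a single 2-D frame (list of pixel rows).

def upsample_frame(frame, factor):
    """Repeat each pixel `factor` times horizontally and vertically."""
    out = []
    for row in frame:
        wide = []
        for px in row:
            wide.extend([px] * factor)
        out.extend([wide[:] for _ in range(factor)])
    return out

low_res = [[1, 2], [3, 4]]
high_res = upsample_frame(low_res, 2)
```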

INFORMATION PROCESSING APPARATUS, INFERENCE METHOD, AND STORAGE MEDIUM

Publication No.:  US2025299051A1 25/09/2025
Applicant: 
CANON KK [JP]
CANON KABUSHIKI KAISHA

Abstract of: US2025299051A1

An information processing apparatus configured to execute inference using a convolutional neural network, including: an obtainment unit configured to obtain target data from data for inference inputted in the information processing apparatus; and a computation unit configured to execute convolutional computation and output computation result data, the convolutional computation using computation data including the target data obtained by the obtainment unit and margin data different from the target data that is required to obtain the computation result data in a predetermined size, in which the obtainment unit obtains first data, which is a part of the margin data, from a data group existing around the target data separately from the target data in the data for inference and does not obtain second data, which is the margin data except the first data, from the data group.
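The margin-data idea is the familiar halo problem in tiled convolution: producing a fixed-size output for a tile requires a few values from just outside the tile. A 1-D sketch, with kernel size and tiling chosen purely for illustration:

```python
# Convolve one tile of a 1-D signal; the margin values just outside the
# tile are fetched from the surrounding data group, and anything past the
# signal edge is zero-padded.

def convolve_tile(data, start, end, kernel):
    """Convolve data[start:end] with an odd-length kernel."""
    k = len(kernel) // 2  # margin width needed on each side
    lo = max(start - k, 0)
    hi = min(end + k, len(data))
    tile = data[lo:hi]  # target data plus available margin data
    out = []
    for i in range(start - lo, start - lo + (end - start)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i - k + j
            v = tile[idx] if 0 <= idx < len(tile) else 0.0  # zero-pad edges
            acc += w * v
        out.append(acc)
    return out

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
kernel = [1.0, 1.0, 1.0]  # 3-tap moving sum
middle = convolve_tile(data, 2, 4, kernel)  # needs margins data[1], data[4]
```

The apparatus in the abstract goes further by fetching only part of the margin data ("first data") and skipping the rest; the sketch shows only the basic tile-plus-margin computation.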

SYSTEM AND METHODS FOR DISTRIBUTED LEARNING IN A RADIO ACCESS NETWORK WITH MULTIPLE AGENTS

Publication No.:  WO2025194307A1 25/09/2025
Applicant: 
HUAWEI TECH CO LTD [CN]
HUAWEI TECHNOLOGIES CO., LTD
WO_2025194307_PA

Abstract of: WO2025194307A1

A network node is configured to receive global network states (GNSs) from a central node, each GNS representing the state of a global environment of the network for a time-based criterion. The network node then generates multiple local neural network (NN) models, each corresponding to at least one GNS, and is configured to receive a local observation vector (OV) based on an environment state of the ancillary node and generate a local action vector (AV) accordingly. The network node also receives environment states from neighbor nodes, combines them with its own environment state to identify a specific GNS, selects a local NN model that corresponds to that GNS, and uses it to generate the local AV for application to its environment. The network states can include the demands on the resources at the network node and the local AV can include an adjustment in the allocation of the resources according to the demands.
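The node behaviour the abstract describes — combine local and neighbour environment states, match them to a global network state (GNS), and apply the NN model kept for that GNS — can be sketched as below; the scalar states, averaging rule, and stand-in models are all assumptions for illustration.

```python
# Pick a per-GNS local model by matching the combined environment state to
# the closest GNS reference state, then generate the local action vector.

def identify_gns(local_state, neighbor_states, gns_table):
    """Match the averaged state to the nearest GNS reference state."""
    states = [local_state] + neighbor_states
    combined = sum(states) / len(states)
    return min(gns_table, key=lambda g: abs(gns_table[g] - combined))

def make_model(scale):
    """Stand-in for a local NN model trained for one GNS."""
    return lambda observation: [scale * v for v in observation]

gns_table = {"low_load": 0.2, "high_load": 0.8}  # reference state per GNS
models = {"low_load": make_model(1.0), "high_load": make_model(0.5)}

def local_action(local_state, neighbor_states, observation):
    """Generate the local action vector using the GNS-selected model."""
    gns = identify_gns(local_state, neighbor_states, gns_table)
    return models[gns](observation)

# Busy network: combined state ~0.8, so the high_load model is selected.
action = local_action(0.9, [0.7, 0.8], [10.0, 20.0])
```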

DYNAMIC PRECISION FOR NEURAL NETWORK COMPUTE OPERATIONS

Publication No.:  US2025299032A1 25/09/2025
Applicant: 
INTEL CORP [US]
Intel Corporation
ES_2986903_T3

Abstract of: US2025299032A1

In an example, an apparatus comprises a compute engine comprising a high precision component and a low precision component; and logic, at least partially including hardware logic, to receive instructions in the compute engine; select at least one of the high precision component or the low precision component to execute the instructions; and apply a gate to at least one of the high precision component or the low precision component to execute the instructions. Other embodiments are also disclosed and claimed.
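A functional sketch of the gating idea follows; the precision components and the selection rule are illustrative assumptions, not the claimed hardware logic.

```python
# Gate an instruction to exactly one of two precision components.

def high_precision_mul(a, b):
    """Full-precision multiply."""
    return a * b

def low_precision_mul(a, b):
    """Cheap path: round inputs to 1 decimal place first (toy
    reduced-precision arithmetic)."""
    return round(a, 1) * round(b, 1)

def compute_engine(a, b, needs_high_precision):
    """Route the instruction to the selected precision component."""
    if needs_high_precision:
        return high_precision_mul(a, b)
    return low_precision_mul(a, b)

exact = compute_engine(1.23456, 2.0, needs_high_precision=True)
fast = compute_engine(1.23456, 2.0, needs_high_precision=False)
```

In hardware the "gate" would enable or power the chosen component rather than branch in software; the sketch only shows the selection structure.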

EFFICIENCY ADJUSTABLE SPEECH RECOGNITION SYSTEM

Publication No.:  EP4621769A2 24/09/2025

Applicant:

MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC

EP_4621769_PA

Abstract of: EP4621769A2

A computing system is configured to generate a transformer-transducer-based deep neural network. The transformer-transducer-based deep neural network comprises a transformer encoder network and a transducer predictor network. The transformer encoder network has a plurality of layers, each of which includes a multi-head attention network sublayer and a feed-forward network sublayer. The computing system trains an end-to-end (E2E) automatic speech recognition (ASR) model, using the transformer-transducer-based deep neural network. The E2E ASR model has one or more adjustable hyperparameters that are configured to dynamically adjust an efficiency or a performance of E2E ASR model when the E2E ASR model is deployed onto a device or executed by the device.
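The adjustable-hyperparameter idea can be sketched as a config-driven model builder; the layer counts, head counts, and the budget rule below are invented for illustration, and the layer contents are placeholders rather than real attention or feed-forward modules.

```python
# Build a transformer encoder stack whose size is chosen per deployment
# budget, mirroring the efficiency/performance trade-off in the abstract.

def build_encoder(num_layers, num_heads, ffn_dim):
    """Each encoder layer pairs a multi-head attention sublayer with a
    feed-forward sublayer, as in the transformer encoder described."""
    return [{"attention_heads": num_heads, "ffn_dim": ffn_dim}
            for _ in range(num_layers)]

def configure_asr_model(device_budget):
    """Pick hyperparameters from a deployment budget (assumed rule)."""
    if device_budget == "mobile":
        return build_encoder(num_layers=6, num_heads=4, ffn_dim=1024)
    return build_encoder(num_layers=18, num_heads=8, ffn_dim=2048)

mobile_encoder = configure_asr_model("mobile")
server_encoder = configure_asr_model("server")
```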
