Ministerio de Industria, Turismo y Comercio

Redes Neuronales (Neural Networks)

Results: 20
Last updated: 01/09/2024 [07:56:00]
Applications published in the last 30 days
Results 1 to 20

FORMULATION GRAPH FOR MACHINE LEARNING OF CHEMICAL PRODUCTS

Publication No.: US2024290440A1 29/08/2024
Applicant:
DOW GLOBAL TECH LLC [US]
Dow Global Technologies LLC
US_11862300_PA

Abstract of: US2024290440A1

Chemical formulations for chemical products can be represented by digital formulation graphs for use in machine learning models. The digital formulation graphs can be input to graph-based algorithms such as graph neural networks to produce a feature vector, which is a denser description of the chemical product than the digital formulation graph. The feature vector can be input to a supervised machine learning model to predict one or more attribute values of the chemical product that would be produced by the formulation without actually having to go through the production process. The feature vector can be input to an unsupervised machine learning model trained to compare chemical products based on feature vectors of the chemical products. The unsupervised machine learning model can recommend a substitute chemical product based on the comparison.
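The abstract above maps a formulation graph through a graph neural network into a dense feature vector. As a rough, hypothetical sketch of that pipeline (not the patented model), here is one round of neighbor aggregation followed by a mean readout in plain Python, with made-up ingredient features:

```python
def neighbors(i, edges):
    """Indices of nodes sharing an edge with node i."""
    return [b if a == i else a for a, b in edges if i in (a, b)]

def message_pass(node_feats, edges):
    """One round of mean-aggregation over neighbors, as in a simple GNN layer."""
    out = []
    for i, feat in enumerate(node_feats):
        neigh = [node_feats[j] for j in neighbors(i, edges)]
        if neigh:
            mean = [sum(col) / len(neigh) for col in zip(*neigh)]
            feat = [f + m for f, m in zip(feat, mean)]
        out.append(feat)
    return out

def readout(node_feats):
    """Pool node features into one dense product-level feature vector."""
    return [sum(col) / len(node_feats) for col in zip(*node_feats)]

# Toy formulation graph: three ingredients, two mixing relations.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edges = [(0, 1), (1, 2)]
product_vector = readout(message_pass(feats, edges))
```

The resulting vector could then feed a supervised attribute predictor or an unsupervised similarity model, as the abstract describes.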

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Publication No.: US2024289285A1 29/08/2024
Applicant:
GOOGLE LLC [US]
Google LLC
KR_20240105502_A

Abstract of: US2024289285A1

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
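The stored-index mechanism described above can be sketched in a few lines; the "memory bank" and "data bus" here are ordinary Python lists standing in for hardware:

```python
def store_with_index(activations):
    """Store activations and build an index of addresses holding non-zero values."""
    bank = list(activations)
    index = [addr for addr, v in enumerate(bank) if v != 0]
    return bank, index

def provide_nonzero(bank, index):
    """Provide only the indexed (non-zero) activations onto the 'data bus',
    so compute units never see the zeros."""
    return [bank[addr] for addr in index]

bank, index = store_with_index([0, 3, 0, 0, 7, 1])
```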

TRANSFORMER NEURAL NETWORK IN MEMORY

Publication No.: US2024289597A1 29/08/2024
Applicant:
MICRON TECH INC [US]
Micron Technology, Inc
CN_116235186_PA

Abstract of: US2024289597A1

Apparatuses and methods can be related to implementing a transformer neural network in a memory. A transformer neural network can be implemented utilizing a resistive memory array. The memory array can comprise programmable memory cells that can be programmed and used to store weights of the transformer neural network and perform operations consistent with the transformer neural network.
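As an idealized illustration of in-memory computation with a resistive array (ignoring device noise and programming details, which the patent addresses in hardware): each cell's conductance stores a weight, and applying input voltages yields column currents that realize a matrix-vector product, the core operation of a transformer layer.

```python
def crossbar_mvm(voltages, conductances):
    """Idealized resistive crossbar: column current I_j = sum_i V_i * G[i][j]."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

# Toy 2x2 weight matrix stored as conductances.
G = [[0.5, 1.0],
     [2.0, 0.0]]
currents = crossbar_mvm([1.0, 2.0], G)
```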

METHOD AND APPARATUS FOR OPTIMIZING INFERENCE OF DEEP NEURAL NETWORKS

Publication No.: US2024289612A1 29/08/2024
Applicant:
INTEL CORP [US]
Intel Corporation
CN_117396889_PA

Abstract of: US2024289612A1

The application provides a hardware-aware cost model for optimizing inference of a deep neural network (DNN) comprising: a computation cost estimator configured to compute estimated computation cost based on input tensor, weight tensor and output tensor from the DNN; and a memory/cache cost estimator configured to perform memory/cache cost estimation strategy based on hardware specifications, wherein the hardware-aware cost model is used to perform performance simulation on target hardware to provide dynamic quantization knobs to quantization as required for converting a conventional precision inference model to an optimized inference model based on the result of the performance simulation.
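A hardware-aware cost model of the kind described can be caricatured with a roofline-style estimate combining a computation term and a memory-traffic term; the peak-throughput and bandwidth figures below are placeholder numbers, not taken from the application:

```python
def layer_cost(n, k, m, peak_macs=1e12, bytes_per_sec=1e11):
    """Roofline-style time estimate for a dense layer: input (n,k) x weight (k,m).
    Returns whichever of compute time or memory time dominates."""
    macs = n * k * m                           # multiply-accumulate operations
    traffic = 4 * (n * k + k * m + n * m)      # fp32 bytes for input/weight/output tensors
    return max(macs / peak_macs, traffic / bytes_per_sec)

t_small_batch = layer_cost(1, 1024, 1024)      # memory-bound at batch 1
t_large_batch = layer_cost(64, 1024, 1024)     # compute-bound at batch 64
```

An estimator like this lets a quantization pass predict, before running anything, which layers would benefit from lower precision.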

ALWAYS-ON NEUROMORPHIC AUDIO PROCESSING MODULES AND METHODS

Publication No.: WO2024175770A1 29/08/2024
Applicant:
INNATERA NANOSYSTEMS B V [NL]
INNATERA NANOSYSTEMS B.V
WO_2024175770_A1

Abstract of: WO2024175770A1

Always-on audio processing requires low-power embedded system implementations. A neuromorphic approach to audio processing utilizing spiking neural networks implemented on a dedicated spiking neural network accelerator allows low-power realizations of typical audio pipelines. This disclosure describes modules and methods to implement such an embedded pipeline.
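The basic unit of the spiking networks that such accelerators execute is the leaky integrate-and-fire (LIF) neuron. A minimal software model (illustrative constants, not Innatera's implementation):

```python
def lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: membrane potential leaks each step,
    integrates the input, and emits a spike (then resets) when it crosses threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after spike
        else:
            spikes.append(0)
    return spikes

out = lif([0.6, 0.6, 0.0, 1.2])
```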

TRAINING APPARATUS AND METHOD FOR IMPLEMENTING EDGE ARTIFICIAL INTELLIGENCE DEVICE BY USING RESISTIVE ELEMENTS, AND ANALYSIS APPARATUS AND METHOD USING SAME

Publication No.: EP4421688A1 28/08/2024
Applicant:
POSTECH ACAD IND FOUND [KR]
Postech Academy-Industry Foundation
EP_4421688_A1

Abstract of: EP4421688A1

A learning apparatus and method for implementing an edge device using a resistive element and an analysis apparatus and method using the same are disclosed. The learning apparatus for implementing an edge device using a resistive element according to an embodiment of the present application may include a first learning unit determining a weight of an artificial neural network through learning based on first training data and reflecting the determined weight in a first resistive element, and a second learning unit updating the weight of the artificial neural network through learning based on second training data collected through the device and reflecting the updated weight in a second resistive element.

PROCESSING SEQUENCES USING CONVOLUTIONAL NEURAL NETWORKS

Publication No.: EP4421686A2 28/08/2024
Applicant:
DEEPMIND TECH LTD [GB]
DeepMind Technologies Limited
EP_4421686_A2

Abstract of: EP4421686A2

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a neural network output from an input sequence of audio data. One of the methods includes, for each of the inputs, providing a current input sequence that comprises the input and the inputs preceding the input in the input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of inputs: receive the current input sequence for the input, and process the current input sequence to generate an alternative representation for the input; and providing the alternative representations to an output subnetwork, wherein the output subnetwork is configured to receive the alternative representations and to process the alternative representations to generate the neural network output. The method performs an audio data processing task.
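The dilated convolutional layers referred to above grow the receptive field with the dilation factor: each output at position t only sees inputs at t, t-d, t-2d, and so on. A tiny causal dilated convolution over a toy sequence (hypothetical kernel, not the patented network):

```python
def causal_dilated_conv(x, kernel, dilation):
    """Causal dilated 1-D convolution: kernel[k] is applied to x[t - k*dilation],
    so no output depends on future inputs."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

# A difference kernel at dilation 2 computes x[t] - x[t-2].
y = causal_dilated_conv([1, 2, 3, 4, 5, 6], kernel=[1.0, -1.0], dilation=2)
```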

METHOD AND SYSTEM FOR AUTOMATED CORRECTION AND/OR COMPLETION OF A DATABASE

Publication No.: US2024281422A1 22/08/2024
Applicant:
SIEMENS AG [DE]
Siemens Aktiengesellschaft
CN_117813597_PA

Abstract of: US2024281422A1

An auto-encoder model that processes datasets describing a physical part from a part catalogue in the form of a property co-occurrence graph is provided; the model performs entity resolution and auto-completion on the co-occurrence graph in order to compute a corrected and/or completed dataset. The encoder includes a recurrent neural network and a graph attention network. The decoder contains a linear decoder for numeric values and a recurrent neural network decoder for strings. The auto-encoder model provides an automated end-to-end solution that can auto-complete missing information as well as correct data errors such as misspellings or wrong values. The auto-encoder model is capable of auto-completion for highly unaligned part specification data with missing values.

USING MULTIPLE TRAINED MODELS TO REDUCE DATA LABELING EFFORTS

Publication No.: US2024281431A1 22/08/2024
Applicant:
XEROX CORP [US]
Xerox Corporation
US_2023350880_A1

Abstract of: US2024281431A1

A method of labeling training data includes inputting a plurality of unlabeled input data samples into each of a plurality of pre-trained neural networks and extracting a set of feature embeddings from multiple layer depths of each of the plurality of pre-trained neural networks. The method also includes generating a plurality of clusterings from the set of feature embeddings. The method also includes analyzing, by a processing device, the plurality of clusterings to identify a subset of the plurality of unlabeled input data samples that belong to a same unknown class. The method also includes assigning pseudo-labels to the subset of the plurality of unlabeled input data samples.
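One simple way to realize the "same unknown class" idea above (a sketch, not Xerox's disclosed analysis) is to group samples whose cluster assignments agree across every clustering derived from the different networks and layer depths:

```python
def co_clustered(clusterings):
    """Group sample indices whose cluster assignment agrees across all clusterings.
    Each clustering is a list: clusterings[c][i] = cluster id of sample i."""
    n = len(clusterings[0])
    groups = {}
    for i in range(n):
        key = tuple(c[i] for c in clusterings)   # assignment signature of sample i
        groups.setdefault(key, []).append(i)
    # Only multi-sample groups are evidence of a shared (unknown) class.
    return [g for g in groups.values() if len(g) > 1]

# Two clusterings of 5 samples (e.g. from two pre-trained networks).
clusterings = [[0, 0, 1, 1, 0],
               [2, 2, 0, 0, 1]]
pseudo_groups = co_clustered(clusterings)
```

Each returned group would then receive a common pseudo-label.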

DISTILLATION OF DEEP ENSEMBLES

Publication No.: US2024281649A1 22/08/2024
Applicant:
GE PREC HEALTHCARE LLC [US]
GE Precision Healthcare LLC

Abstract of: US2024281649A1

Systems/techniques that facilitate improved distillation of deep ensembles are provided. In various embodiments, a system can access a deep learning ensemble configured to perform an inferencing task. In various aspects, the system can iteratively distill the deep learning ensemble into a smaller deep learning ensemble configured to perform the inferencing task, wherein a current distillation iteration can involve training a new neural network of the smaller deep learning ensemble via a loss function that is based on one or more neural networks of the smaller deep learning ensemble which were trained during one or more previous distillation iterations.
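One way to read the iteration above (an interpretation for illustration, not GE's disclosed loss function) is that each new student network is fitted to whatever residual the already-trained students leave uncovered, so that the smaller ensemble's mean tracks the teacher ensemble. Solving for the next student's target:

```python
def residual_target(teacher_mean, trained_students):
    """Target for the next student so that the mean of the student ensemble
    (previous students plus the new one) matches the teacher ensemble's mean."""
    k = len(trained_students)
    if k == 0:
        return list(teacher_mean)
    current = [sum(s[i] for s in trained_students) / k
               for i in range(len(teacher_mean))]
    # Solve (k * current_mean + t) / (k + 1) = teacher_mean for t.
    return [(k + 1) * tm - k * cm for tm, cm in zip(teacher_mean, current)]

# Teacher ensemble mean vs. one already-distilled student.
target = residual_target([0.5, 0.7], [[0.4, 0.9]])
```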

INSPECTION METHOD, CLASSIFICATION METHOD, MANAGEMENT METHOD, STEEL MATERIAL PRODUCTION METHOD, LEARNING MODEL GENERATION METHOD, LEARNING MODEL, INSPECTION DEVICE, AND STEEL MATERIAL PRODUCTION EQUIPMENT

Publication No.: US2024282098A1 22/08/2024
Applicant:
JFE STEEL CORP [JP]
JFE STEEL CORPORATION
MX_2024000342_A

Abstract of: US2024282098A1

Provided are an inspection method, a classification method, a management method, a steel material production method, a learning model generation method, a learning model, an inspection device, and steel material production equipment that can both improve detection accuracy and reduce processing time. The inspection method is an inspection method of detecting surface defects on an inspection target, the inspection method including: an imaging step (S1) of acquiring an image of a surface of the inspection target; an extraction step (S3) of extracting defect candidate parts from the image; a screening step (S4) of screening the extracted defect candidate parts by a first defect determination; and an inspection step (S5) of detecting harmful or harmless surface defects by a second defect determination using a convolutional neural network, the second defect determination being targeted at defect candidate parts after the screening by the first defect determination.
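The screening-then-CNN flow can be sketched with stand-in stages; the area threshold and the contrast-based scorer below are invented placeholders for the two defect determinations, chosen only to show how the cheap first stage cuts the workload of the expensive second stage:

```python
def first_screen(candidates, area_min=5):
    """Cheap first defect determination: drop tiny candidate regions."""
    return [c for c in candidates if c["area"] >= area_min]

def second_determination(candidates, score_fn, threshold=0.5):
    """Stand-in for the CNN-based second determination: keep candidates
    the scorer classifies as harmful."""
    return [c for c in candidates if score_fn(c) > threshold]

cands = [
    {"area": 2, "contrast": 0.9},   # screened out by size
    {"area": 8, "contrast": 0.7},   # passes both stages -> harmful
    {"area": 9, "contrast": 0.1},   # passes screening, scored harmless
]
harmful = second_determination(first_screen(cands),
                               score_fn=lambda c: c["contrast"])
```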

SYSTEMS AND METHODS FOR LOCATION THREAT MONITORING

Publication No.: US2024281658A1 22/08/2024
Applicant:
PROOFPOINT INC [US]
Proofpoint, Inc
US_2020193284_A1

Abstract of: US2024281658A1

Disclosed is a new location threat monitoring solution that leverages deep learning (DL) to process data from data sources on the Internet, including social media and the dark web. Data containing textual information relating to a brand is fed to a DL model having a DL neural network trained to recognize or infer whether a piece of natural language input data from a data source references an address or location of interest to the brand, regardless of whether the piece of natural language input data actually contains the address or location. A DL module can determine, based on an outcome from the neural network, whether the data is to be classified for potential location threats. If so, the data is provided to location threat classifiers for identifying a location threat with respect to the address or location referenced in the data from the data source.

Vector Computation Unit in a Neural Network Processor

Publication No.: US2024273368A1 15/08/2024
Applicant:
GOOGLE LLC [US]
Google LLC
JP_2023169224_PA

Abstract of: US2024273368A1

A circuit for performing neural network computations for a neural network comprising a plurality of layers, the circuit comprising: activation circuitry configured to receive a vector of accumulated values and configured to apply a function to each accumulated value to generate a vector of activation values; and normalization circuitry coupled to the activation circuitry and configured to generate a respective normalized value from each activation value.
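The two circuit stages translate directly into software. A minimal model with ReLU as the activation function and sum-normalization (the claim does not fix either choice; both are illustrative assumptions):

```python
def activate(accumulated, fn=lambda v: max(0.0, v)):
    """Activation circuitry: apply a function (ReLU here) to each accumulated value."""
    return [fn(v) for v in accumulated]

def normalize(activations):
    """Normalization circuitry: rescale each activation value (sum-normalization here)."""
    total = sum(activations) or 1.0   # guard against an all-zero vector
    return [a / total for a in activations]

norm = normalize(activate([-2.0, 1.0, 3.0]))
```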

METHOD OF GENERATING NEGATIVE SAMPLE SET FOR PREDICTING MACROMOLECULE-MACROMOLECULE INTERACTION, METHOD OF PREDICTING MACROMOLECULE-MACROMOLECULE INTERACTION, METHOD OF TRAINING MODEL, AND NEURAL NETWORK MODEL FOR PREDICTING MACROMOLECULE-MACROMOLECULE INTERACTION

Publication No.: US2024273351A1 15/08/2024
Applicant:
BOE TECH GROUP CO LTD [CN]
BOE Technology Group Co., Ltd
CN_116686050_PA

Abstract of: US2024273351A1

A method of generating a negative sample set for predicting macromolecule-macromolecule interaction is provided. The method includes receiving a positive sample set including pairs of macromolecules of a first type and macromolecules of a second type having macromolecule-macromolecule interaction; generating a first similarity map of the macromolecules of the first type; generating a second similarity map of the macromolecules of the second type; generating vectorized representations of nodes in the first similarity map and vectorized representations of nodes in the second similarity map; and generating the negative sample set using the vectorized representations of nodes in the first similarity map and the vectorized representations of nodes in the second similarity map.

DIALOGUE TRAINING WITH RICH REFERENCE-FREE DISCRIMINATORS

Publication No.: US2024273369A1 15/08/2024
Applicant:
TENCENT AMERICA LLC [US]
TENCENT AMERICA LLC
JP_2023552137_A

Abstract of: US2024273369A1

A method of generating a neural network based open-domain dialogue model, includes receiving an input utterance from a device having a conversation with the dialogue model, obtaining a plurality of candidate replies to the input utterance from the dialogue model, determining a plurality of discriminator scores for the candidate replies based on reference-free discriminators, determining a plurality of quality scores associated with the candidate replies, and training the dialogue model based on the quality scores.

SELECTING A NEURAL NETWORK ARCHITECTURE FOR A SUPERVISED MACHINE LEARNING PROBLEM

Publication No.: US2024273370A1 15/08/2024
Applicant:
MICROSOFT TECH LICENSING LLC [US]
Microsoft Technology Licensing, LLC
JP_2021523430_A

Abstract of: US2024273370A1

Systems and methods for selecting a neural network for a machine learning (ML) problem are disclosed. A method includes accessing an input matrix, and accessing an ML problem space associated with an ML problem and multiple untrained candidate neural networks for solving the ML problem. The method includes computing, for each untrained candidate neural network, at least one expressivity measure capturing an expressivity of the candidate neural network with respect to the ML problem. The method includes computing, for each untrained candidate neural network, at least one trainability measure capturing a trainability of the candidate neural network with respect to the ML problem. The method includes selecting, based on the at least one expressivity measure and the at least one trainability measure, at least one candidate neural network for solving the ML problem. The method includes providing an output representing the selected at least one candidate neural network.
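A toy version of the selection step; the expressivity and trainability numbers below are invented, and combining the two measures by a product is just one plausible rule (the patent leaves the combination open):

```python
# Hypothetical per-candidate measures computed without training the networks.
candidates = {
    "net_a": {"expressivity": 0.9, "trainability": 0.4},
    "net_b": {"expressivity": 0.7, "trainability": 0.8},
}

def select(cands):
    """Pick the candidate maximizing a combined expressivity*trainability score."""
    return max(cands, key=lambda n: cands[n]["expressivity"] * cands[n]["trainability"])

best = select(candidates)
```

Here the highly expressive but hard-to-train `net_a` loses to the better-balanced `net_b`.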

Systems and Methods for Interaction-Based Trajectory Prediction

Publication No.: US2024270260A1 15/08/2024
Applicant:
AURORA OPERATIONS INC [US]
Aurora Operations, Inc
US_11975726_PA

Abstract of: US2024270260A1

Systems and methods for predicting interactions between objects and predicting a trajectory of an object are presented herein. A system can obtain object data associated with a first object and a second object. The object data can have position data and velocity data for the first object and the second object. Additionally, the system can process the obtained object data to generate a hybrid graph using a graph generator. The hybrid graph can have a first node indicative of the first object and a second node indicative of the second object. Moreover, the system can process, using an interaction prediction model, the generated hybrid graph to predict an interaction type between the first node and the second node. Furthermore, the system can process, using a graph neural network model, the predicted interaction type between the first node and the second node to predict a trajectory of the first object.

DISTRIBUTED DEPLOYMENT AND INFERENCE METHOD FOR DEEP SPIKING NEURAL NETWORK, AND RELATED APPARATUS

Publication No.: WO2024164508A1 15/08/2024
Applicant:
PENG CHENG LABORATORY [CN]
鹏城实验室 (Peng Cheng Laboratory)
WO_2024164508_A1

Abstract of: WO2024164508A1

Disclosed in the present application are a distributed deployment and inference method for a deep spiking neural network, and a related apparatus. The method comprises: acquiring a deep spiking neural network to be deployed, and splitting the deep spiking neural network into several sub-neural networks; compiling each of the several sub-neural networks, so as to obtain a generative model file corresponding to each sub-neural network; and sequentially deploying the generative model files corresponding to the several sub-neural networks into a brain-inspired chip, so as to gradually perform inference on input data by means of the several sub-neural networks to obtain output data corresponding to the input data. In the present application, a deep spiking network is split into several sub-neural networks, and the individual sub-neural networks are separately compiled and deployed, such that the deep spiking neural network can be deployed into a brain-inspired chip, and the brain-inspired chip uses a large-scale deep spiking neural network to perform model inference, thereby expanding the scope of usage of the deep spiking neural network.
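The split/compile/deploy flow can be mimicked in software by partitioning a list of layer functions and running the parts in sequence; this is a sketch of the dataflow only, not of compilation for a brain-inspired chip:

```python
def split(layers, parts):
    """Split a network (list of layer functions) into roughly equal sub-networks."""
    size = -(-len(layers) // parts)   # ceiling division
    return [layers[i:i + size] for i in range(0, len(layers), size)]

def run_pipeline(subnets, x):
    """'Deploy' and execute each sub-network in turn, passing data onward."""
    for subnet in subnets:
        for layer in subnet:
            x = layer(x)
    return x

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
subnets = split(layers, 2)
result = run_pipeline(subnets, 1)
```

Splitting lets each part fit within one chip's resources while the sequence as a whole computes the full network.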

PERFORMING PROCESSING-IN-MEMORY OPERATIONS RELATED TO PRE-SYNAPTIC SPIKE SIGNALS, AND RELATED METHODS AND SYSTEMS

Publication No.: US2024273349A1 15/08/2024
Applicant:
MICRON TECH INC [US]
Micron Technology, Inc
KR_20220054664_PA

Abstract of: US2024273349A1

Spiking events in a spiking neural network (SNN) may be processed via a memory system. A memory system may store data corresponding to a group of destination neurons. The memory system may, at each time interval of the SNN, pass through data corresponding to a group of pre-synaptic spike events from respective source neurons. The data corresponding to the group of pre-synaptic spike events may be subsequently stored in the memory system.
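A minimal software model of the per-interval pass-through, with dictionaries standing in for the memory system and made-up synaptic weights:

```python
# weights[source][destination]: synaptic weights held "in memory".
weights = {
    "s0": {"d0": 0.5, "d1": 0.2},
    "s1": {"d0": 0.1, "d1": 0.9},
}

def interval(spiking_sources):
    """One SNN time interval: pass through this interval's pre-synaptic spike
    events and accumulate weighted input at each destination neuron."""
    acc = {"d0": 0.0, "d1": 0.0}
    for s in spiking_sources:
        for d, w in weights[s].items():
            acc[d] += w
    return acc

acc = interval(["s0", "s1"])   # both source neurons spike this interval
```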

ADAPTIVE VISUAL SPEECH RECOGNITION

Publication No.: US2024265911A1 08/08/2024
Applicant:
DEEPMIND TECH LIMITED [GB]
DeepMind Technologies Limited
JP_2024520985_PA

Abstract of: US2024265911A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing video data using an adaptive visual speech recognition model. One of the methods includes receiving a video that includes a plurality of video frames that depict a first speaker; obtaining a first embedding characterizing the first speaker; and processing a first input comprising (i) the video and (ii) the first embedding using a visual speech recognition neural network having a plurality of parameters, wherein the visual speech recognition neural network is configured to process the video and the first embedding in accordance with trained values of the parameters to generate a speech recognition output that defines a sequence of one or more words being spoken by the first speaker in the video.
