Ministerio de Industria, Turismo y Comercio
 

Sare neuronalak (Neural networks)

Results: 33
Last updated: 20/09/2024 [08:22:00]
Applications published in the last 30 days
Results 1 to 25 of 33

FOVEATING NEURAL NETWORK

Publication No.: US2024314452A1 19/09/2024
Applicant:
VARJO TECH OY [FI]
Varjo Technologies Oy

Abstract of: US2024314452A1

Disclosed is an imaging system with an image sensor; and at least one processor configured to obtain image data read out by the image sensor; obtain information indicative of a gaze direction of a given user; and utilise at least one neural network to perform demosaicking on an entirety of the image data; identify a gaze region and a peripheral region of the image data, based on the gaze direction of the given user; and apply at least one image restoration technique to one of the gaze region and the peripheral region of the image data.
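As an illustration of the region-splitting step described in this abstract, here is a minimal numpy sketch. The circular gaze region, its radius, and the stand-in restoration function are assumptions; the demosaicking network itself is not shown.

```python
import numpy as np

def split_gaze_regions(image, gaze_xy, radius):
    """Return boolean masks for the gaze region and the peripheral region.

    image   : H x W x C array (demosaicking of the full image is assumed already done).
    gaze_xy : (x, y) pixel position derived from the user's gaze direction.
    radius  : illustrative radius, in pixels, of a circular gaze region (an assumption).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    gaze_mask = dist <= radius
    return gaze_mask, ~gaze_mask

def restore_region(image, mask, restore_fn):
    """Apply an image-restoration function only where the mask is True."""
    out = image.copy()
    out[mask] = restore_fn(image)[mask]
    return out

img = np.random.rand(480, 640, 3).astype(np.float32)
gaze_mask, periph_mask = split_gaze_regions(img, gaze_xy=(320, 240), radius=100)
# Stand-in "restoration": the real system would apply a learned restoration technique.
restored = restore_region(img, periph_mask, lambda x: np.clip(x * 1.1, 0.0, 1.0))
```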

SYSTEM AND METHOD FOR NEURAL NETWORK ORCHESTRATION

Publication No.: US2024312184A1 19/09/2024
Applicant:
VERITONE INC [US]
Veritone, Inc
US_2023377312_PA

Abstract of: US2024312184A1

Methods and systems for training one or more neural networks for transcription and for transcribing a media file using the trained one or more neural networks are provided. One of the methods includes: segmenting the media file into a plurality of segments; inputting each segment, one segment at a time, of the plurality of segments into a first neural network trained to perform speech recognition; extracting outputs, one segment at a time, from one or more layers of the first neural network; and training a second neural network to generate a predicted-WER (word error rate) of a plurality of transcription engines for each segment based at least on outputs from the one or more layers of the first neural network.
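A hedged PyTorch sketch of the second network described above: it maps per-segment features extracted from the first (speech-recognition) network to a predicted WER per transcription engine. The feature dimension, engine count, architecture, and loss are assumptions.

```python
import torch
import torch.nn as nn

# Assumed sizes: per-segment features already extracted from hidden layers of the
# first (speech-recognition) network, and four candidate transcription engines.
FEATURE_DIM, NUM_ENGINES = 256, 4

class WERPredictor(nn.Module):
    """Second network: maps one segment's layer outputs to a predicted WER per engine."""
    def __init__(self, feature_dim=FEATURE_DIM, num_engines=NUM_ENGINES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, num_engines), nn.Sigmoid(),   # WER assumed normalised to [0, 1]
        )

    def forward(self, segment_features):
        return self.net(segment_features)

# One illustrative training step on placeholder data.
model = WERPredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, FEATURE_DIM)       # 32 segments
measured_wer = torch.rand(32, NUM_ENGINES)    # per-engine WER labels (placeholder)
loss = nn.functional.mse_loss(model(features), measured_wer)
loss.backward()
optimiser.step()
```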

EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION

Publication No.: US2024311267A1 19/09/2024
Applicant:
GOOGLE LLC [US]
Google LLC
CN_117396890_PA

Abstract of: US2024311267A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a surrogate neural network configured to determine a predicted performance measure of a hardware accelerator having a target hardware configuration on a target application. The trained instance of the surrogate neural network can be used, in addition to or in place of hardware simulation, during a search process for determining hardware configurations for application-specific hardware accelerators, i.e., hardware accelerators on which one or more neural networks can be deployed to perform one or more target machine learning tasks.
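A minimal sketch of how a trained surrogate could replace hardware simulation inside a configuration search, assuming configurations are encoded as fixed-length feature vectors and that higher predicted performance is better; the surrogate here is a placeholder scorer.

```python
import numpy as np

def search_configs(candidate_configs, surrogate_predict, top_k=5):
    """Rank candidate hardware configurations by the surrogate's predicted performance.

    candidate_configs : list of fixed-length feature vectors encoding each configuration
                        (the encoding scheme is an assumption, not from the abstract).
    surrogate_predict : trained surrogate returning a predicted performance measure,
                        used here in place of hardware simulation.
    """
    scores = [surrogate_predict(np.asarray(c)) for c in candidate_configs]
    order = np.argsort(scores)[::-1]             # higher predicted performance first
    return [candidate_configs[i] for i in order[:top_k]]

# Usage with a stand-in surrogate (a linear scorer, purely illustrative).
configs = [np.random.rand(8) for _ in range(100)]
weights = np.random.rand(8)
best = search_configs(configs, lambda c: float(weights @ c))
```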

SPARSITY-AWARE DATASTORE FOR INFERENCE PROCESSING IN DEEP NEURAL NETWORK ARCHITECTURES

Publication No.: EP4430525A1 18/09/2024
Applicant:
INTEL CORP [US]
Intel Corporation
CN_117597691_PA

Abstract of: CN117597691A

Systems, apparatuses, and methods may provide techniques to prefetch compressed data and a sparsity bitmap from a memory to store the compressed data in a decode buffer, where the compressed data is associated with a plurality of tensors, where the compressed data is in a compressed format. The technique aligns the compressed data with the sparsity bitmap to generate decoded data, and provides the decoded data to a plurality of processing elements.
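A minimal numpy sketch of aligning compressed data with a sparsity bitmap, assuming the compressed stream simply packs the non-zero elements in order; the packed layout is an assumption, only the align-and-decode idea comes from the abstract.

```python
import numpy as np

def decode_with_bitmap(compressed_values, sparsity_bitmap):
    """Expand densely packed non-zero values back to their original positions.

    compressed_values : 1-D array holding only the non-zero tensor elements (assumed layout).
    sparsity_bitmap   : 1-D 0/1 array, one flag per original element (1 = non-zero).
    """
    bitmap = np.asarray(sparsity_bitmap, dtype=bool)
    decoded = np.zeros(bitmap.size, dtype=compressed_values.dtype)
    decoded[bitmap] = compressed_values          # align compressed data with the bitmap
    return decoded

# Usage: an 8-element tensor with 3 non-zeros.
values = np.array([5.0, 1.0, 7.0])
bitmap = np.array([0, 1, 0, 0, 1, 0, 0, 1])
print(decode_with_bitmap(values, bitmap))        # [0. 5. 0. 0. 1. 0. 0. 7.]
```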

INCORPORATING STRUCTURED KNOWLEDGE IN NEURAL NETWORKS

Publication No.: WO2024186551A1 12/09/2024
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
MICROSOFT TECHNOLOGY LICENSING, LLC
WO_2024186551_PA

Abstract of: WO2024186551A1

An approach to structured knowledge modeling and the incorporation of learned knowledge in neural networks is disclosed. Knowledge is encoded in a knowledge base (KB) in a manner that is explicit and structured, such that it is human-interpretable, verifiable, and editable. Another neural network is able to read from and/or write to the knowledge model based on structured queries. The knowledge model has an interpretable property name-value structure, represented using property name embedding vectors and property value embedding vectors, such that an interpretable, structured query on the knowledge base may be formulated by a neural model in terms of tensor operations. The knowledge base admits gradient-based training or updates (of the knowledge base itself and/or a neural network(s) supported by the knowledge base), allowing knowledge or knowledge representations to be inferred from a training set using machine learning training methods.
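A hedged sketch of a differentiable knowledge-base read expressed as tensor operations; the softmax-weighted lookup over property name embeddings and all dimensions are illustrative choices, not the patent's exact query formulation.

```python
import torch
import torch.nn.functional as F

# The KB holds one embedding per property name and one per property value; the
# softmax-weighted read below is an assumption about how "tensor operations" could
# realise a structured query.
NUM_ENTRIES, NAME_DIM, VALUE_DIM = 100, 64, 64
property_names = torch.randn(NUM_ENTRIES, NAME_DIM, requires_grad=True)
property_values = torch.randn(NUM_ENTRIES, VALUE_DIM, requires_grad=True)

def kb_read(query_embedding):
    """Return a value embedding for the property name best matching the query."""
    scores = property_names @ query_embedding     # (NUM_ENTRIES,)
    weights = F.softmax(scores, dim=0)            # soft, differentiable selection
    return weights @ property_values              # weighted mix of value embeddings

query = torch.randn(NAME_DIM)
value = kb_read(query)
value.sum().backward()   # gradients flow into the KB, enabling gradient-based updates
```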

System and Method for Preventing Attacks on a Machine Learning Model Based on an Internal State of the Model

Publication No.: US2024303328A1 12/09/2024
Applicant:
IRDETO B V [NL]
Irdeto B.V
CN_118627585_PA

Abstract of: US2024303328A1

Disclosed implementations include a method of detecting attacks on Machine Learning (ML) models by applying the concept of anomaly detection based on the internal state of the model being protected. Instead of looking at the input or output data directly, disclosed implementations look at the internal state of the hidden layers of a neural network of the model after processing of data. By examining how different layers within a neural network model are behaving, an inference can be made as to whether the data that produced the observed state is anomalous (and thus possibly part of an attack on the model).
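A minimal PyTorch sketch of the idea: capture a hidden layer's activations with a forward hook and score new inputs against statistics gathered on clean data. The per-feature z-score criterion is an illustrative anomaly test, not the detector disclosed in the patent.

```python
import torch
import torch.nn as nn

# Toy model whose hidden state (after the ReLU) is observed via a forward hook.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
captured = {}
model[1].register_forward_hook(lambda mod, inp, out: captured.update(state=out.detach()))

def fit_reference(clean_inputs):
    """Record mean/std of the hidden state over data assumed to be attack-free."""
    model(clean_inputs)
    state = captured["state"]
    return state.mean(dim=0), state.std(dim=0) + 1e-6

def anomaly_score(x, mean, std):
    """Higher score = internal state further from the reference statistics."""
    model(x)
    return ((captured["state"] - mean).abs() / std).mean(dim=1)

mean, std = fit_reference(torch.randn(1000, 16))
scores = anomaly_score(5.0 * torch.randn(8, 16), mean, std)   # deliberately off-distribution
```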

METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING

Publication No.: US2024303837A1 12/09/2024
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
JP_2020126651_A

Abstract of: US2024303837A1

A neural network apparatus includes one or more processors comprising: a controller configured to determine a shared operand to be shared in parallelized operations as being either one of a pixel value among pixel values of an input feature map and a weight value among weight values of a kernel, based on either one or both of a feature of the input feature map and a feature of the kernel; and one or more processing units configured to perform the parallelized operations based on the determined shared operand.
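A rough sketch of the controller's choice of shared operand; the reuse-count heuristic used here is an assumption, since the abstract only says the decision is based on features of the input feature map and the kernel.

```python
import numpy as np

def choose_shared_operand(input_fmap, kernel):
    """Decide whether a pixel value or a weight value is shared across parallel units.

    The heuristic below (share whichever operand would be reused by more parallel
    multiplies) is an illustrative assumption, not the patent's rule.
    """
    reuse_if_pixel_shared = kernel.shape[-1]                           # one pixel meets many output-channel weights
    reuse_if_weight_shared = input_fmap.shape[1] * input_fmap.shape[2] # one weight sweeps H x W pixels
    return "pixel" if reuse_if_pixel_shared >= reuse_if_weight_shared else "weight"

def parallel_multiply(shared_operand, other_operands):
    """The parallelised operations: one shared operand against a vector of operands."""
    return shared_operand * np.asarray(other_operands)

fmap = np.random.rand(1, 8, 8, 64)       # N, H, W, C layout (assumed)
kern = np.random.rand(3, 3, 64, 128)     # kh, kw, Cin, Cout layout (assumed)
print(choose_shared_operand(fmap, kern))          # "weight" for these shapes
print(parallel_multiply(2.0, [1.0, 3.0, 5.0]))    # [ 2.  6. 10.]
```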

Stateful training of machine learning models in wireless networking

Publication No.: GB2627937A 11/09/2024
Applicant:
NOKIA TECHNOLOGIES OY [FI]
Nokia Technologies Oy
GB_2627937_A

Abstract of: GB2627937A

Methods for retraining only an adaptive portion 330 of a partially-frozen machine learning model 300, such as a neural network or decision tree. In a first method, user equipment apparatus (such as a smartphone) receives a freeze-to-adaptive ratio from network apparatus (such as a server). Based on the freeze-to-adaptive ratio, the user equipment apparatus determines an adaptive portion of the machine learning model to train and a frozen portion 320 of the machine learning model to not train. The user equipment apparatus retrains the adaptive portion of the machine learning model to produce a retrained model. The user equipment apparatus then selects either the retrained model or the original model based on respective performance measures of the two models, such as accuracy or user feedback. In an alternative method, network apparatus accesses a machine learning model comprising a frozen portion and an adaptive portion. The network apparatus generates a backup 340 of the adaptive portion of the machine learning model, retrains the adaptive portion to provide a retrained adaptive portion, and transmits parameters of the retrained adaptive portion to user equipment apparatus. The methods may be used for stateful training of machine learning models in wireless networking.
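A hedged PyTorch sketch of the first method: derive a frozen/adaptive split from a freeze-to-adaptive ratio, retrain only the adaptive parameters, then keep whichever model evaluates better. The split-by-parameter-order rule, step count, and placeholder loss/metric are assumptions.

```python
import copy
import torch
import torch.nn as nn

def apply_freeze_ratio(model, freeze_to_adaptive_ratio):
    """Freeze the leading fraction of parameter tensors; leave the rest adaptive.

    Splitting by parameter order is an illustrative rule; the patent only says the
    adaptive/frozen split is determined from the ratio received from the network.
    """
    params = list(model.parameters())
    n_frozen = int(len(params) * freeze_to_adaptive_ratio / (1 + freeze_to_adaptive_ratio))
    for i, p in enumerate(params):
        p.requires_grad = i >= n_frozen       # only the adaptive tail is trainable
    return model

def retrain_and_select(model, train_step, evaluate):
    """Retrain the adaptive portion, then keep the better of (original, retrained)."""
    original = copy.deepcopy(model)
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
    for _ in range(10):                       # illustrative number of local steps
        opt.zero_grad()
        train_step(model).backward()
        opt.step()
    return model if evaluate(model) >= evaluate(original) else original

net = apply_freeze_ratio(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2)), 1.0)
selected = retrain_and_select(
    net,
    train_step=lambda m: m(torch.randn(4, 8)).pow(2).mean(),          # placeholder local loss
    evaluate=lambda m: -m(torch.randn(64, 8)).pow(2).mean().item(),   # placeholder metric
)
```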

FORMULATION GRAPH FOR MACHINE LEARNING OF CHEMICAL PRODUCTS

Publication No.: WO2024182178A1 06/09/2024
Applicant:
DOW GLOBAL TECH LLC [US]
DOW GLOBAL TECHNOLOGIES LLC
US_2024290440_A1

Abstract of: WO2024182178A1

Chemical formulations for chemical products can be represented by digital formulation graphs for use in machine learning models. The digital formulation graphs can be input to graph-based algorithms such as graph neural networks to produce a feature vector, which is a denser description of the chemical product than the digital formulation graph. The feature vector can be input to a supervised machine learning model to predict one or more attribute values of the chemical product that would be produced by the formulation without actually having to go through the production process. The feature vector can be input to an unsupervised machine learning model trained to compare chemical products based on feature vectors of the chemical products. The unsupervised machine learning model can recommend a substitute chemical product based on the comparison.
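A minimal sketch of the pipeline: a generic message-passing layer pools a formulation graph into a feature vector, and a supervised head maps that vector to a predicted product attribute. The architecture, graph encoding, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of mean-aggregation message passing followed by global pooling.

    A generic graph-neural-network sketch; the patent does not prescribe a specific
    architecture, only that a graph-based model yields a feature vector.
    """
    def __init__(self, in_dim=8, hidden=32, out_dim=16):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, node_feats, adjacency):
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        neighbours = (adjacency @ node_feats) / deg      # mean over neighbouring nodes
        h = torch.relu(self.msg(neighbours))
        return self.out(h.mean(dim=0))                   # graph-level feature vector

# Supervised head predicting one attribute of the chemical product.
gnn, head = TinyGNN(), nn.Linear(16, 1)
nodes = torch.randn(5, 8)                 # 5 ingredients, 8 descriptors each (assumed)
adj = torch.ones(5, 5) - torch.eye(5)     # fully connected formulation graph (assumed)
predicted_attribute = head(gnn(nodes, adj))
```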

SELECTION-INFERENCE NEURAL NETWORK SYSTEMS

Publication No.: AU2023267975A1 05/09/2024
Applicant:
DEEPMIND TECH LIMITED
DEEPMIND TECHNOLOGIES LIMITED
AU_2023267975_PA

Abstract of: AU2023267975A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a response to a query input using a selection-inference neural network.

METHOD FOR ANALYSING A NOISY SOUND SIGNAL FOR THE RECOGNITION OF CONTROL KEYWORDS AND OF A SPEAKER OF THE ANALYSED NOISY SOUND SIGNAL

Publication No.: US2024296859A1 05/09/2024
Applicant:
CENTRE NATIONAL DE LA RECHERCHE SCIENT [FR]
UNIV DE MONTPELLIER [FR]
CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE,
UNIVERSITÉ DE MONTPELLIER
WO_2023057384_PA

Abstract of: US2024296859A1

A method for analysing a noisy sound signal for the recognition of at least one group of control keywords and of a speaker of the analysed noisy sound signal, the noisy sound signal being recorded by a microphone and the method including: supervised training of an artificial neural network using a training database in order to obtain a trained artificial neural network capable of providing, based on a sound signature obtained from a noisy sound signal, a prediction of the speaker and at least one prediction of a group of control keywords, the training database including a plurality of sound signatures, each associated with a speaker and with at least one group of control keywords; calculating a sound signature of the analysed noisy sound signal; using the trained artificial neural network on the calculated sound signature in order to obtain a prediction of the speaker and at least one prediction of a group of control keywords.
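A hedged sketch of the trained network's two outputs, a speaker prediction and a control-keyword-group prediction, computed from a sound signature by a shared backbone with two heads. The signature size, class counts, and architecture are assumptions.

```python
import torch
import torch.nn as nn

class SpeakerKeywordNet(nn.Module):
    """Two-headed classifier over a sound signature: speaker and control-keyword group.

    The abstract only specifies that one trained network predicts both the speaker and
    a keyword group from a sound signature; everything else here is an assumption.
    """
    def __init__(self, signature_dim=40, n_speakers=10, n_keyword_groups=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(signature_dim, 128), nn.ReLU())
        self.speaker_head = nn.Linear(128, n_speakers)
        self.keyword_head = nn.Linear(128, n_keyword_groups)

    def forward(self, signature):
        h = self.backbone(signature)
        return self.speaker_head(h), self.keyword_head(h)

# One supervised training step on placeholder labelled signatures.
net = SpeakerKeywordNet()
sig = torch.randn(16, 40)
speaker_y, keyword_y = torch.randint(0, 10, (16,)), torch.randint(0, 5, (16,))
sp_logits, kw_logits = net(sig)
loss = nn.functional.cross_entropy(sp_logits, speaker_y) + \
       nn.functional.cross_entropy(kw_logits, keyword_y)
loss.backward()
```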

Methods and systems for performing a standard convolution on a GPU

Publication No.: GB2627805A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627805_A

Abstract of: GB2627805A

A method of implementing a convolution comprising: receiving an input tensor (2402, fig 24) in dense format; identifying active positions (2406, fig 24) of the input tensor; generating, using an indexed unfold operation, an input matrix (2404, fig 24), comprising elements of the input tensor in each non-zero window (2408, fig 24) of the input tensor; and multiplying the input matrix with a weight matrix (2502, fig 25) to generate an output matrix (2504, fig 25). The method may further comprise generating, using an indexed fold operation, the output tensor (2602, fig 26) by identifying, based on the non-zero windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.
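A minimal numpy sketch of the unfold-multiply-fold pattern for a single-channel, stride-1 convolution: only non-zero windows are gathered into the input matrix, multiplied by the flattened filter, and written back via their recorded positions. The single-channel layout is a simplification.

```python
import numpy as np

def indexed_unfold_conv(x, w):
    """Unfold-and-matmul convolution (valid padding, stride 1) over non-zero windows only.

    x : H x W input (single channel, for brevity); w : kh x kw filter.
    Gathering only windows containing at least one non-zero element mirrors the
    "indexed unfold" idea; writing results back by their recorded positions mirrors
    the "indexed fold".
    """
    kh, kw = w.shape
    H, W = x.shape
    rows, index = [], []
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            window = x[i:i + kh, j:j + kw]
            if np.any(window):                       # skip all-zero windows
                rows.append(window.ravel())
                index.append((i, j))
    input_matrix = np.stack(rows)                    # one row per non-zero window
    output_rows = input_matrix @ w.ravel()           # the matrix multiplication
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=x.dtype)
    for (i, j), v in zip(index, output_rows):        # the indexed fold step
        out[i, j] = v
    return out

x = np.zeros((6, 6)); x[1, 2] = 1.0; x[4, 4] = 2.0
print(indexed_unfold_conv(x, np.ones((3, 3))))
```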

Methods and systems for performing a sparse submanifold deconvolution on a GPU

Publication No.: GB2627807A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627807_A

Abstract of: GB2627807A

A method of implementing a sparse submanifold deconvolution between an input tensor and a filter, representable as a direct convolution between an input tensor and a plurality of sub-filters comprising weights of the filter, the method comprising: receiving the input tensor (4104, fig 41) in dense format, identifying active positions of the output tensor (4102, fig 41), generating, using an indexed-unfold operation, an input matrix (4106, fig 41) comprising elements of the input tensor in each non-zero sub-window (4108, fig 41) relevant to the active positions of the output tensor, and multiplying the input matrix with a weight matrix (4202, fig 42) comprising the sub-filters to generate an output matrix (4204, fig 42). The method may further comprise generating, using an indexed fold operation, the output tensor (4302, fig 43) by identifying, based on the active positions, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

METHODS AND SYSTEMS FOR PERFORMING A SPARSE SUBMANIFOLD CONVOLUTION USING AN NNA

Publication No.: EP4425418A1 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
EP_4425418_PA

Abstract of: EP4425418A1

Methods of implementing a sparse submanifold convolution using a neural network accelerator. The methods include: receiving, at the neural network accelerator, an input tensor in a sparse format; performing, at the neural network accelerator, for each position of a kernel of the sparse submanifold convolution, a 1x1 convolution between the received input tensor and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor of the sparse submanifold convolution in sparse format.
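A minimal numpy sketch of the decomposition: each kernel position contributes a 1x1 convolution (a single scaled, shifted copy of the input), the partial outputs are summed, and results are kept only at the input's active positions. Dense single-channel arrays are used purely for clarity; the accelerator's sparse format is not modelled.

```python
import numpy as np

def submanifold_conv_as_1x1(x, w):
    """Express a k x k convolution as one 1x1 convolution per kernel position.

    x : H x W input, w : k x k filter (single channel for brevity). Each kernel
    position contributes one scaled, shifted copy of the input; the partial outputs
    are summed, and results are kept only at the input's active (non-zero) positions,
    which is what makes the convolution "submanifold".
    """
    H, W = x.shape
    k = w.shape[0]
    pad = k // 2
    padded = np.pad(x, pad)
    partial_sum = np.zeros_like(x)
    for di in range(k):
        for dj in range(k):
            partial_sum += w[di, dj] * padded[di:di + H, dj:dj + W]   # one partial output
    return np.where(x != 0, partial_sum, 0.0)       # keep outputs only at active positions

x = np.zeros((5, 5)); x[2, 2] = 1.0; x[0, 1] = 3.0
print(submanifold_conv_as_1x1(x, np.ones((3, 3))))
```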

Methods and systems for performing a sparse submanifold convolution on a GPU

Publication No.: GB2627804A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627804_A

Abstract of: GB2627804A

A method of implementing a sparse submanifold convolution comprising: receiving an input tensor (804, fig 8) in dense format, identifying active positions (608, fig 8) of the input tensor, generating, using an indexed-unfold operation, an input matrix (802, fig 8) comprising elements of the input tensor in each active window (804-806, fig 8) of the input tensor, and multiplying the input matrix with a weight matrix (1002, fig 10) to generate an output matrix (1004, fig 10). The method may further comprise generating, using an indexed fold operation, the output tensor (1102, fig 11) by identifying, based on the active windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

GRAPH NEURAL NETWORK GENERATION METHOD, APPARATUS AND SYSTEM, MEDIUM AND ELECTRONIC DEVICE

Publication No.: EP4425353A2 04/09/2024
Applicant:
LEMON INC [KY]
Lemon Inc
EP_4425353_A2

Abstract of: EP4425353A2

The disclosure relates to a method, apparatus, system, medium and electronic device for graph neural network generation. The method includes: obtaining a subgraph structure, the subgraph structure being configured to reflect a graph structure of a corresponding subgraph, and the subgraph comprising a plurality of nodes and edges between the plurality of nodes; obtaining, based on the subgraph structure and according to a predetermined priority, node features of the plurality of nodes and edge features of the edges from a plurality of memories; the predetermined priority being obtained by sorting the plurality of memories in accordance with memory size in an ascending order; fusing, based on the subgraph structure, the node features of the plurality of nodes and the edge features of the edges to obtain subgraph data; and training, based on the subgraph data, the graph neural network. The method for graph neural network generation of the present disclosure may improve the efficiency of graph neural network training.
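A rough sketch of the priority-ordered feature lookup, assuming each memory is represented as a (size, mapping) pair; the later fusion and training steps described in the abstract are not shown.

```python
import numpy as np

def fetch_features(node_ids, memories):
    """Fetch node features from multiple memories, consulting smaller memories first.

    memories : list of (memory_size, {node_id: feature}) pairs. Sorting by size in
    ascending order implements the predetermined priority described in the abstract;
    representing each memory as a dict is an assumption made for illustration.
    """
    prioritised = sorted(memories, key=lambda m: m[0])       # ascending memory size
    features = {}
    for _, store in prioritised:
        for nid in node_ids:
            if nid not in features and nid in store:
                features[nid] = store[nid]
    return [features[nid] for nid in node_ids]

small_mem = (8, {0: np.ones(4)})                             # e.g. on-device memory (assumed)
large_mem = (64, {0: np.zeros(4), 1: 2 * np.ones(4)})        # e.g. host memory (assumed)
node_feats = fetch_features([0, 1], [large_mem, small_mem])  # node 0 resolved from small_mem
```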

Methods and systems for performing a standard deconvolution on a GPU

Publication No.: GB2627806A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627806_A

Abstract of: GB2627806A

A method of implementing a deconvolution between an input tensor and a filter, the method comprising: receiving the input tensor (3702, fig 37) in dense format, identifying active positions of the input tensor, generating, using an indexed-unfold operation, an input matrix (3704, fig 37) comprising elements of the input tensor in each non-zero sub-window (3706, fig 37) of the input tensor, and multiplying the input matrix with a weight matrix (3802, fig 38) to generate an output matrix (3804, fig 38). The method may further comprise generating, using an indexed fold operation, the output tensor (3904, fig 39) by identifying, based on the non-zero sub-windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Publication No.: US2024289285A1 29/08/2024
Applicant:
GOOGLE LLC [US]
Google LLC
KR_20240105502_A

Abstract of: US2024289285A1

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
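A minimal sketch of the index-and-provide idea: store the activations, index the addresses holding non-zero values, and stream only those to the compute units. The generator stands in for the data bus, purely for illustration.

```python
import numpy as np

def build_nonzero_index(activations):
    """Store the activations in a 'memory bank' and index addresses of non-zero values."""
    memory_bank = np.asarray(activations)
    index = np.flatnonzero(memory_bank)           # memory address locations of non-zeros
    return memory_bank, index

def provide_activations(memory_bank, index):
    """Provide only the indexed (non-zero) activations to the computational array."""
    for address in index:
        yield address, memory_bank[address]

bank, idx = build_nonzero_index([0.0, 3.0, 0.0, 0.0, 1.5])
for address, value in provide_activations(bank, idx):
    print(address, value)      # 1 3.0  then  4 1.5
```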

TRANSFORMER NEURAL NETWORK IN MEMORY

Publication No.: US2024289597A1 29/08/2024
Applicant:
MICRON TECH INC [US]
Micron Technology, Inc
CN_116235186_PA

Abstract of: US2024289597A1

Apparatuses and methods can be related to implementing a transformer neural network in a memory. A transformer neural network can be implemented utilizing a resistive memory array. The memory array can comprise programmable memory cells that can be programmed and used to store weights of the transformer neural network and perform operations consistent with the transformer neural network.
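A rough numeric sketch of the matrix-vector product a resistive array provides, assuming transformer weights are stored as cell conductances and input activations are applied as voltages (Ohm's law plus current summation); programming, scaling, and quantisation details are omitted.

```python
import numpy as np

def resistive_matvec(conductances, voltages):
    """Column currents of a crossbar: weighted sums of the applied voltages.

    conductances : (rows, cols) array standing in for stored weights.
    voltages     : (rows,) array standing in for input activations.
    """
    return voltages @ conductances          # each column current = one dot product

weights = np.random.rand(64, 64)            # e.g. one attention projection matrix (assumed)
x = np.random.rand(64)
projected = resistive_matvec(weights, x)
```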

FORMULATION GRAPH FOR MACHINE LEARNING OF CHEMICAL PRODUCTS

Publication No.: US2024290440A1 29/08/2024
Applicant:
DOW GLOBAL TECH LLC [US]
Dow Global Technologies LLC
US_11862300_PA

Abstract of: US2024290440A1

Chemical formulations for chemical products can be represented by digital formulation graphs for use in machine learning models. The digital formulation graphs can be input to graph-based algorithms such as graph neural networks to produce a feature vector, which is a denser description of the chemical product than the digital formulation graph. The feature vector can be input to a supervised machine learning model to predict one or more attribute values of the chemical product that would be produced by the formulation without actually having to go through the production process. The feature vector can be input to an unsupervised machine learning model trained to compare chemical products based on feature vectors of the chemical products. The unsupervised machine learning model can recommend a substitute chemical product based on the comparison.

METHOD AND APPARATUS FOR OPTIMIZING INFERENCE OF DEEP NEURAL NETWORKS

Publication No.: US2024289612A1 29/08/2024
Applicant:
INTEL CORP [US]
Intel Corporation
CN_117396889_PA

Abstract of: US2024289612A1

The application provides a hardware-aware cost model for optimizing inference of a deep neural network (DNN) comprising: a computation cost estimator configured to compute estimated computation cost based on input tensor, weight tensor and output tensor from the DNN; and a memory/cache cost estimator configured to perform memory/cache cost estimation strategy based on hardware specifications, wherein the hardware-aware cost model is used to perform performance simulation on target hardware to provide dynamic quantization knobs to quantization as required for converting a conventional precision inference model to an optimized inference model based on the result of the performance simulation.
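A hedged sketch of combining a computation-cost estimate with a memory-cost estimate for one fully connected layer, roofline-style; the formulas and hardware fields are assumptions, and comparing precisions is meant only to illustrate how such estimates could drive dynamic quantization decisions.

```python
def layer_cost(batch, in_features, out_features, bytes_per_elem, hw):
    """Roofline-style cost estimate for one fully connected layer.

    hw is a dict with assumed fields 'peak_macs_per_s' and 'mem_bytes_per_s'; both the
    formulas and the max() combination are illustrative, not the patent's cost model.
    """
    macs = batch * in_features * out_features
    bytes_moved = bytes_per_elem * (batch * in_features           # input tensor
                                    + in_features * out_features   # weight tensor
                                    + batch * out_features)        # output tensor
    compute_time = macs / hw["peak_macs_per_s"]
    memory_time = bytes_moved / hw["mem_bytes_per_s"]
    return max(compute_time, memory_time)

hw_int8 = {"peak_macs_per_s": 4e12, "mem_bytes_per_s": 5e10}   # assumed hardware specs
hw_fp32 = {"peak_macs_per_s": 1e12, "mem_bytes_per_s": 5e10}
print(layer_cost(1, 1024, 1024, 1, hw_int8),    # int8 estimate
      layer_cost(1, 1024, 1024, 4, hw_fp32))    # fp32 estimate
```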

ALWAYS-ON NEUROMORPHIC AUDIO PROCESSING MODULES AND METHODS

Publication No.: WO2024175770A1 29/08/2024
Applicant:
INNATERA NANOSYSTEMS B V [NL]
INNATERA NANOSYSTEMS B.V
WO_2024175770_A1

Abstract of: WO2024175770A1

Always-on audio processing requires low-power embedded system implementations. A neuromorphic approach to audio processing utilizing spiking neural networks implemented on a dedicated spiking neural network accelerator allows low-power realizations of typical audio pipelines. This disclosure describes modules and methods to implement such an embedded pipeline.

TRAINING APPARATUS AND METHOD FOR IMPLEMENTING EDGE ARTIFICIAL INTELLIGENCE DEVICE BY USING RESISTIVE ELEMENTS, AND ANALYSIS APPARATUS AND METHOD USING SAME

Publication No.: EP4421688A1 28/08/2024
Applicant:
POSTECH ACAD IND FOUND [KR]
Postech Academy-Industry Foundation
EP_4421688_A1

Abstract of: EP4421688A1

A learning apparatus and method for implementing an edge device using a resistive element and an analysis apparatus and method using the same are disclosed. The learning apparatus for implementing an edge device using a resistive element according to an embodiment of the present application may include a first learning unit determining a weight of an artificial neural network through learning based on first training data and reflecting the determined weight in a first resistive element, and a second learning unit updating the weight of the artificial neural network through learning based on second training data collected through the device and reflecting the updated weight in the second resistive element.

PROCESSING SEQUENCES USING CONVOLUTIONAL NEURAL NETWORKS

Publication No.: EP4421686A2 28/08/2024
Applicant:
DEEPMIND TECH LTD [GB]
DeepMind Technologies Limited
EP_4421686_A2

Abstract of: EP4421686A2

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a neural network output from an input sequence of audio data. One of the methods includes, for each of the inputs, providing a current input sequence that comprises the input and the inputs preceding the input in the input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of inputs: receive the current input sequence for the input, and process the current input sequence to generate an alternative representation for the input; and providing the alternative representations to an output subnetwork, wherein the output subnetwork is configured to receive the alternative representations and to process the alternative representations to generate the neural network output. The method performs an audio data processing task.
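A minimal PyTorch sketch of a causal dilated convolutional subnetwork followed by a 1x1 output subnetwork; channel counts, depth, and the doubling dilation schedule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DilatedConvSubnetwork(nn.Module):
    """Stack of causal, dilated 1-D convolutions producing an alternative representation.

    The abstract only specifies a convolutional subnetwork of dilated layers feeding an
    output subnetwork; the sizes and dilation schedule here are assumptions.
    """
    def __init__(self, channels=32, layers=4):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.convs = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
             for i in range(layers)]
        )

    def forward(self, audio):                        # audio: (batch, 1, time)
        h = self.inp(audio)
        for conv in self.convs:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            h = torch.relu(conv(nn.functional.pad(h, (pad, 0))))   # left-pad => causal
        return h                                     # alternative representation per step

output_subnetwork = nn.Conv1d(32, 256, kernel_size=1)   # e.g. logits over sample values (assumed)
x = torch.randn(1, 1, 1600)
logits = output_subnetwork(DilatedConvSubnetwork()(x))
```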

DISTILLATION OF DEEP ENSEMBLES

Publication No.: US2024281649A1 22/08/2024
Applicant:
GE PREC HEALTHCARE LLC [US]
GE Precision Healthcare LLC

Abstract of: US2024281649A1

Systems/techniques that facilitate improved distillation of deep ensembles are provided. In various embodiments, a system can access a deep learning ensemble configured to perform an inferencing task. In various aspects, the system can iteratively distill the deep learning ensemble into a smaller deep learning ensemble configured to perform the inferencing task, wherein a current distillation iteration can involve training a new neural network of the smaller deep learning ensemble via a loss function that is based on one or more neural networks of the smaller deep learning ensemble which were trained during one or more previous distillation iterations.
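A hedged PyTorch sketch of one distillation iteration: the new member is trained so that the smaller ensemble as a whole (previously distilled members plus the new student) matches the large ensemble's average prediction. This loss formulation is an assumption consistent with, but not identical to, the abstract's description.

```python
import torch
import torch.nn as nn

def distillation_iteration(big_ensemble, small_ensemble, make_student, data, steps=100):
    """Train one new member of the smaller ensemble (one distillation iteration)."""
    student = make_student()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(steps):
        x = data()
        with torch.no_grad():
            teacher = torch.stack([m(x) for m in big_ensemble]).mean(dim=0)
            prev = [m(x) for m in small_ensemble]        # members from earlier iterations
        combined = torch.stack(prev + [student(x)]).mean(dim=0)
        opt.zero_grad()
        nn.functional.mse_loss(combined, teacher).backward()
        opt.step()
    small_ensemble.append(student)
    return small_ensemble

make_net = lambda: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
big = [make_net() for _ in range(10)]                    # stand-in for the large ensemble
small = distillation_iteration(big, [], make_net, lambda: torch.randn(64, 8))
```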
