Ministerio de Industria, Turismo y Comercio
 

Neural Networks

Results: 30
Last update: 27/09/2024 [10:05:00]
Applications published in the last 30 days
Results 1 to 25 of 30

ENHANCING HYBRID TRADITIONAL NEURAL NETWORKS WITH LIQUID NEURAL NETWORK UNITS FOR CYBER SECURITY AND OFFENSE PROTECTION

Publication No.: US2024323203A1 26/09/2024
Applicant:
BANK OF AMERICA CORP [US]
Bank of America Corporation
US_2023188542_PA

Abstract of: US2024323203A1

Aspects of the disclosure relate to enhancing hybrid traditional neural networks with liquid neural networks for cyber security and offense protection. A computing platform may receive a request to access enterprise organization data. The computing platform may compare the current request to previous requests to determine whether a similar request was previously processed. If a similar request was not previously processed, the computing platform may flag the request as a threat and may analyze the request. The computing platform may extract data from the request and may use the extracted data to generate rules, threat detection algorithms, and training models. The computing platform may use the rules, threat detection algorithms, and training models to train a deep learning neural network to identify and handle threats to an enterprise organization.

METHODS AND SYSTEMS FOR PERFORMING A SPARSE SUBMANIFOLD CONVOLUTION USING AN NNA

Publication No.: US2024320298A1 26/09/2024
Applicant:
IMAGINATION TECH LIMITED [GB]
Imagination Technologies Limited
EP_4435712_PA

Abstract of: US2024320298A1

Methods of implementing a sparse submanifold convolution using a neural network accelerator. The methods include: receiving, at the neural network accelerator, an input tensor in a sparse format; performing, at the neural network accelerator, for each position of a kernel of the sparse submanifold convolution, a 1×1 convolution between the received input tensor and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor of the sparse submanifold convolution in sparse format.
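The decomposition described in this abstract, one 1×1 convolution per kernel position with the partial outputs summed, can be sketched outside an accelerator in plain Python. This is a minimal illustration under assumed data structures (a dict-of-coordinates sparse format; `sparse_submanifold_conv` is a hypothetical name), not the claimed NNA implementation:

```python
import numpy as np

def sparse_submanifold_conv(coords, values, weights):
    """Sparse submanifold convolution decomposed into one 1x1 convolution
    per kernel position, with outputs kept only at active input positions.

    coords:  list of (y, x) active positions
    values:  dict mapping (y, x) -> input feature vector of shape (C_in,)
    weights: dict mapping kernel offset (dy, dx) -> (C_in, C_out) matrix
    """
    out = {}
    for (dy, dx), w in weights.items():    # one 1x1 convolution per kernel position
        for (y, x) in coords:              # submanifold rule: outputs only at active sites
            src = (y + dy, x + dx)
            if src in values:              # a partial output exists only where the
                part = values[src] @ w     # shifted input position is also active
                out[(y, x)] = out.get((y, x), 0) + part
    return out
```

Because outputs are produced only at active input positions, the active set is never dilated by the convolution, which is the defining property of the submanifold variant.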

OPTICAL INFORMATION READING DEVICE

Publication No.: US2024320455A1 26/09/2024
Applicant:
KEYENCE CORP [JP]
Keyence Corporation
US_2023289545_PA

Abstract of: US2024320455A1

To improve reading accuracy through machine-learning inference while suppressing the increase in processing time that the inference load causes, an optical information reading device includes a processor including: an inference processing part that inputs a code image to a neural network and executes inference processing to generate an ideal image corresponding to the code image; and a decoding processing part that executes first decoding processing, which decodes the code image, and second decoding processing, which decodes the ideal image generated by the inference processing part. The processor executes the inference processing and the first decoding processing in parallel, and executes the second decoding processing after the inference processing completes.

ASSIGNING OBSTACLES TO LANES USING NEURAL NETWORKS FOR AUTONOMOUS MACHINE APPLICATIONS

Publication No.: US2024320986A1 26/09/2024
Applicant:
NVIDIA CORP [US]
NVIDIA Corporation
US_2023099494_PA

Abstract of: US2024320986A1

In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
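The post-processing step, turning output segmentation masks into object-to-lane assignments, can be illustrated with a simple majority vote over the mask pixels under each detection. This is a toy sketch, not NVIDIA's method; `assign_objects_to_lanes`, the box format, and the majority-vote rule are all assumptions for the example:

```python
import numpy as np

def assign_objects_to_lanes(lane_mask, boxes):
    """Assign each detected object box to the lane whose identifier
    dominates the segmentation-mask pixels under that box.

    lane_mask: (H, W) int array of lane identifiers (0 = no lane)
    boxes:     list of (y0, y1, x0, x1) detection boxes
    Returns one lane id per box (0 when no lane pixels are covered).
    """
    assignments = []
    for (y0, y1, x0, x1) in boxes:
        patch = lane_mask[y0:y1, x0:x1].ravel()
        patch = patch[patch > 0]                      # ignore non-lane pixels
        if patch.size == 0:
            assignments.append(0)                     # object covers no lane
        else:
            ids, counts = np.unique(patch, return_counts=True)
            assignments.append(int(ids[np.argmax(counts)]))
    return assignments
```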

THIRD-PARTY SERVICE FOR SUGGESTING RESOURCES FOR A RECEIVED MESSAGE

Publication No.: US2024320686A1 26/09/2024
Applicant:
ASAPP INC [US]
ASAPP, INC
US_2023214847_PA

Abstract of: US2024320686A1

A third-party service may be used to assist entities in responding to requests of users by determining a suggested resource corresponding to a received communication. The third party service may receive a request from a first entity, such as via an application programming interface request, that includes a message in a conversation. A conversation feature vector may be computed by processing the message with a first neural network. A suggested resource may be determined using the conversation feature vector. The third-party service may then return the suggested resource for use in the conversation. The third-party service may similarly be used to assist other entities in responding to requests of users.

EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION

Publication No.: US2024311267A1 19/09/2024
Applicant:
GOOGLE LLC [US]
Google LLC
CN_117396890_PA

Abstract of: US2024311267A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a surrogate neural network configured to determine a predicted performance measure of a hardware accelerator having a target hardware configuration on a target application. The trained instance of the surrogate neural network can be used, in addition to or in place of hardware simulation, during a search process for determining hardware configurations for application-specific hardware accelerators, i.e., hardware accelerators on which one or more neural networks can be deployed to perform one or more target machine learning tasks.

FOVEATING NEURAL NETWORK

Publication No.: US2024314452A1 19/09/2024
Applicant:
VARJO TECH OY [FI]
Varjo Technologies Oy

Abstract of: US2024314452A1

Disclosed is an imaging system with an image sensor; and at least one processor configured to obtain image data read out by the image sensor; obtain information indicative of a gaze direction of a given user; and utilise at least one neural network to perform demosaicking on an entirety of the image data; identify a gaze region and a peripheral region of the image data, based on the gaze direction of the given user; and apply at least one image restoration technique to one of the gaze region and the peripheral region of the image data.

SYSTEM AND METHOD FOR NEURAL NETWORK ORCHESTRATION

Publication No.: US2024312184A1 19/09/2024
Applicant:
VERITONE INC [US]
Veritone, Inc
US_2023377312_PA

Abstract of: US2024312184A1

Methods and systems for training one or more neural networks for transcription and for transcribing a media file using the trained one or more neural networks are provided. One of the methods includes: segmenting the media file into a plurality of segments; inputting each segment, one segment at a time, of the plurality of segments into a first neural network trained to perform speech recognition; extracting outputs, one segment at a time, from one or more layers of the first neural network; and training a second neural network to generate a predicted-WER (word error rate) of a plurality of transcription engines for each segment based at least on outputs from the one or more layers of the first neural network.

SPARSITY-AWARE DATASTORE FOR INFERENCE PROCESSING IN DEEP NEURAL NETWORK ARCHITECTURES

Publication No.: EP4430525A1 18/09/2024
Applicant:
INTEL CORP [US]
Intel Corporation
CN_117597691_PA

Abstract of: CN117597691A

Systems, apparatuses, and methods may provide techniques to prefetch compressed data and a sparsity bitmap from a memory to store the compressed data in a decode buffer, where the compressed data is associated with a plurality of tensors, where the compressed data is in a compressed format. The technique aligns the compressed data with the sparsity bitmap to generate decoded data, and provides the decoded data to a plurality of processing elements.
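Aligning the compressed data with its sparsity bitmap amounts to scattering the stored non-zero values back to the bit positions that are set. A minimal sketch for a flat tensor, assuming one bit per dense element (the function name `decode_sparse` is hypothetical):

```python
import numpy as np

def decode_sparse(compressed, bitmap):
    """Reconstruct a dense tensor from zero-removed compressed data and
    its sparsity bitmap, as in the prefetch-and-decode step.

    compressed: 1-D array of the non-zero values, in storage order
    bitmap:     1-D 0/1 array, one entry per dense element (1 = value present)
    """
    bitmap = np.asarray(bitmap, dtype=bool)
    dense = np.zeros(bitmap.size, dtype=float)
    dense[bitmap] = compressed          # scatter stored values to the set bits
    return dense
```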

INCORPORATING STRUCTURED KNOWLEDGE IN NEURAL NETWORKS

Publication No.: WO2024186551A1 12/09/2024
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
MICROSOFT TECHNOLOGY LICENSING, LLC
WO_2024186551_PA

Abstract of: WO2024186551A1

An approach to structured knowledge modeling and the incorporation of learned knowledge in neural networks is disclosed. Knowledge is encoded in a knowledge base (KB) in a manner that is explicit and structured, such that it is human-interpretable, verifiable, and editable. Another neural network is able to read from and/or write to the knowledge model based on structured queries. The knowledge model has an interpretable property name-value structure, represented using property name embedding vectors and property value embedding vectors, such that an interpretable, structured query on the knowledge base may be formulated by a neural model in terms of tensor operations. The knowledge base admits gradient-based training or updates (of the knowledge base itself and/or a neural network(s) supported by the knowledge base), allowing knowledge or knowledge representations to be inferred from a training set using machine learning training methods.

METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING

Publication No.: US2024303837A1 12/09/2024
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
JP_2020126651_A

Abstract of: US2024303837A1

A neural network apparatus includes one or more processors comprising: a controller configured to determine a shared operand to be shared in parallelized operations as being either one of a pixel value among pixel values of an input feature map and a weight value among weight values of a kernel, based on either one or both of a feature of the input feature map and a feature of the kernel; and one or more processing units configured to perform the parallelized operations based on the determined shared operand.

System and Method for Preventing Attacks on a Machine Learning Model Based on an Internal State of the Model

Publication No.: US2024303328A1 12/09/2024
Applicant:
IRDETO B V [NL]
Irdeto B.V
CN_118627585_PA

Abstract of: US2024303328A1

Disclosed implementations include a method of detecting attacks on Machine Learning (ML) models by applying the concept of anomaly detection based on the internal state of the model being protected. Instead of looking at the input or output data directly, disclosed implementations look at the internal state of the hidden layers of a neural network of the model after processing of data. By examining how the different layers within a neural network model are behaving, an inference can be made as to whether the data that produced the observed state is anomalous (and thus possibly part of an attack on the model).

Stateful training of machine learning models in wireless networking

Publication No.: GB2627937A 11/09/2024
Applicant:
NOKIA TECHNOLOGIES OY [FI]
Nokia Technologies Oy
GB_2627937_A

Abstract of: GB2627937A

Methods for retraining only an adaptive portion 330 of a partially-frozen machine learning model 300, such as a neural network or decision tree. In a first method, user equipment apparatus (such as a smartphone) receives a freeze-to-adaptive ratio from network apparatus (such as a server). Based on the freeze-to-adaptive ratio, the user equipment apparatus determines an adaptive portion of the machine learning model to train and a frozen portion 320 of the machine learning model to not train. The user equipment apparatus retrains the adaptive portion of the machine learning model to produce a retrained model. The user equipment apparatus then selects either the retrained model or the original model based on respective performance measures of the two models, such as accuracy or user feedback. In an alternative method, network apparatus accesses a machine learning model comprising a frozen portion and an adaptive portion. The network apparatus generates a backup 340 of the adaptive portion of the machine learning model, retrains the adaptive portion to provide a retrained adaptive portion, and transmits parameters of the retrained adaptive portion to user equipment apparatus. The methods may be used for stateful training of machine learning models in wireless networking.
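The freeze-to-adaptive split and the retrained-versus-original selection in the first method can be illustrated with plain lists standing in for model layers. This is a sketch under assumptions: `split_by_ratio`, the rounding rule, and the scalar performance scores are inventions for the example, not Nokia's protocol:

```python
def split_by_ratio(layers, freeze_to_adaptive_ratio):
    """Split a model's layers into a frozen prefix and an adaptive tail
    according to a freeze-to-adaptive ratio received from the network.
    E.g. a ratio of 3.0 over 8 layers freezes 6 and leaves 2 adaptive.
    """
    n = len(layers)
    frozen_count = round(n * freeze_to_adaptive_ratio / (freeze_to_adaptive_ratio + 1))
    return layers[:frozen_count], layers[frozen_count:]

def select_model(original_score, retrained_score, original, retrained):
    """Keep whichever model scores better on the chosen performance
    measure (e.g. accuracy or user feedback), as the abstract describes."""
    return retrained if retrained_score >= original_score else original
```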

FORMULATION GRAPH FOR MACHINE LEARNING OF CHEMICAL PRODUCTS

Publication No.: WO2024182178A1 06/09/2024
Applicant:
DOW GLOBAL TECH LLC [US]
DOW GLOBAL TECHNOLOGIES LLC
US_2024290440_A1

Abstract of: WO2024182178A1

Chemical formulations for chemical products can be represented by digital formulation graphs for use in machine learning models. The digital formulation graphs can be input to graph-based algorithms such as graph neural networks to produce a feature vector, which is a denser description of the chemical product than the digital formulation graph. The feature vector can be input to a supervised machine learning model to predict one or more attribute values of the chemical product that would be produced by the formulation without actually having to go through the production process. The feature vector can be input to an unsupervised machine learning model trained to compare chemical products based on feature vectors of the chemical products. The unsupervised machine learning model can recommend a substitute chemical product based on the comparison.

METHOD FOR ANALYSING A NOISY SOUND SIGNAL FOR THE RECOGNITION OF CONTROL KEYWORDS AND OF A SPEAKER OF THE ANALYSED NOISY SOUND SIGNAL

Publication No.: US2024296859A1 05/09/2024
Applicant:
CENTRE NATIONAL DE LA RECHERCHE SCIENT [FR]
UNIV DE MONTPELLIER [FR]
CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE
UNIVERSITÉ DE MONTPELLIER
WO_2023057384_PA

Abstract of: US2024296859A1

A method for analysing a noisy sound signal for the recognition of at least one group of control keywords and of a speaker of the analysed noisy sound signal, the noisy sound signal being recorded by a microphone and the method including: supervised training of an artificial neural network using a training database in order to obtain a trained artificial neural network capable of providing, based on a sound signature obtained from a noisy sound signal, a prediction of the speaker and at least one prediction of a group of control keywords, the training database including a plurality of sound signatures, each associated with a speaker and with at least one group of control keywords; calculating a sound signature of the analysed noisy sound signal; using the trained artificial neural network on the calculated sound signature in order to obtain a prediction of the speaker and at least one prediction of a group of control keywords.

INCORPORATING STRUCTURED KNOWLEDGE IN NEURAL NETWORKS

Publication No.: US2024296309A1 05/09/2024
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2024296309_PA

Abstract of: US2024296309A1

An approach to structured knowledge modeling and the incorporation of learned knowledge in neural networks is disclosed. Knowledge is encoded in a knowledge base (KB) in a manner that is explicit and structured, such that it is human-interpretable, verifiable, and editable. Another neural network is able to read from and/or write to the knowledge model based on structured queries. The knowledge model has an interpretable property name-value structure, represented using property name embedding vectors and property value embedding vectors, such that an interpretable, structured query on the knowledge base may be formulated by a neural model in terms of tensor operations. The knowledge base admits gradient-based training or updates (of the knowledge base itself and/or a neural network(s) supported by the knowledge base), allowing knowledge or knowledge representations to be inferred from a training set using machine learning training methods.

SELECTION-INFERENCE NEURAL NETWORK SYSTEMS

Publication No.: AU2023267975A1 05/09/2024
Applicant:
DEEPMIND TECH LIMITED
DEEPMIND TECHNOLOGIES LIMITED
AU_2023267975_PA

Abstract of: AU2023267975A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a response to a query input using a selection-inference neural network.

GRAPH NEURAL NETWORK GENERATION METHOD, APPARATUS AND SYSTEM, MEDIUM AND ELECTRONIC DEVICE

Publication No.: EP4425353A2 04/09/2024
Applicant:
LEMON INC [KY]
Lemon Inc
EP_4425353_A2

Abstract of: EP4425353A2

The disclosure relates to a method, apparatus, system, medium and electronic device for graph neural network generation. The method includes: obtaining a subgraph structure, the subgraph structure being configured to reflect a graph structure of a corresponding subgraph, and the subgraph comprising a plurality of nodes and edges between the plurality of nodes; obtaining, based on the subgraph structure and according to a predetermined priority, node features of the plurality of nodes and edge features of the edges from a plurality of memories; the predetermined priority being obtained by sorting the plurality of memories in accordance with memory size in an ascending order; fusing, based on the subgraph structure, the node features of the plurality of nodes and the edge features of the edges to obtain subgraph data; and training, based on the subgraph data, the graph neural network. The method for graph neural network generation of the present disclosure may improve the efficiency of graph neural network training.

METHODS AND SYSTEMS FOR PERFORMING A SPARSE SUBMANIFOLD CONVOLUTION USING AN NNA

Publication No.: EP4425418A1 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
EP_4425418_PA

Abstract of: EP4425418A1

Methods of implementing a sparse submanifold convolution using a neural network accelerator. The methods include: receiving, at the neural network accelerator, an input tensor in a sparse format; performing, at the neural network accelerator, for each position of a kernel of the sparse submanifold convolution, a 1x1 convolution between the received input tensor and weights of filters of the sparse submanifold convolution at that kernel position to generate a plurality of partial outputs; and combining appropriate partial outputs of the plurality of partial outputs to generate an output tensor of the sparse submanifold convolution in sparse format.

Methods and systems for performing a standard deconvolution on a GPU

Publication No.: GB2627806A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627806_A

Abstract of: GB2627806A

A method of implementing a deconvolution between an input tensor and a filter, the method comprising: receiving the input tensor (3702, fig 37) in dense format, identifying active positions of the input tensor, generating, using an indexed-unfold operation, an input matrix (3704, fig 37) comprising elements of the input tensor in each non-zero sub-window (3706, fig 37) of the input tensor, and multiplying the input matrix with a weight matrix (3802, fig 38) to generate an output matrix (3804, fig 38). The method may further comprise generating, using an indexed fold operation, the output tensor (3904, fig 39) by identifying, based on the non-zero sub-windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

Methods and systems for performing a sparse submanifold convolution on a GPU

Publication No.: GB2627804A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627804_A

Abstract of: GB2627804A

A method of implementing a sparse submanifold convolution comprising: receiving an input tensor (804, fig 8) in dense format, identifying active positions (608, fig 8) of the input tensor, generating, using an indexed-unfold operation, an input matrix (802, fig 8) comprising elements of the input tensor in each active window (804-806, fig 8) of the input tensor, and multiplying the input matrix with a weight matrix (1002, fig 10) to generate an output matrix (1004, fig 10). The method may further comprise generating, using an indexed fold operation, the output tensor (1102, fig 11) by identifying, based on the active windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.
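The indexed-unfold-plus-matrix-multiply formulation shared by these GPU methods gathers each active window into a row of an input matrix, then multiplies by a flattened weight matrix. A single-channel NumPy sketch under assumed names (`indexed_unfold_matmul` is hypothetical) and with windows assumed to lie fully in bounds:

```python
import numpy as np

def indexed_unfold_matmul(x, w, active):
    """Indexed unfold + GEMM: gather the KxK window around each active
    position into a row of the input matrix, then multiply by the
    flattened filter to produce the output matrix (one row per window).

    x:      (H, W) dense input (single channel, for brevity)
    w:      (K, K) filter
    active: list of (y, x) active positions with in-bounds windows
    """
    k = w.shape[0]
    r = k // 2
    rows = [x[y - r:y + r + 1, c - r:c + r + 1].ravel() for (y, c) in active]
    input_matrix = np.stack(rows)       # one row per active window
    weight_matrix = w.ravel()[:, None]  # flattened filter, shape (K*K, 1)
    return input_matrix @ weight_matrix
```

An indexed fold would then scatter each output-matrix row back to its active position; it is the inverse of the gather above.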

Methods and systems for performing a standard convolution on a GPU

Publication No.: GB2627805A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627805_A

Abstract of: GB2627805A

A method of implementing a convolution comprising: receiving an input tensor (2402, fig 24) in dense format; identifying active positions (2406, fig 24) of the input tensor; generating, using an indexed unfold operation, an input matrix (2404, fig 24), comprising elements of the input tensor in each non-zero window (2408, fig 24) of the input tensor; and multiplying the input matrix with a weight matrix (2502, fig 25) to generate an output matrix (2504, fig 25). The method may further comprise generating, using an indexed fold operation, the output tensor (2602, fig 26) by identifying, based on the non-zero windows, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

Methods and systems for performing a sparse submanifold deconvolution on a GPU

Publication No.: GB2627807A 04/09/2024
Applicant:
IMAGINATION TECH LTD [GB]
Imagination Technologies Limited
GB_2627807_A

Abstract of: GB2627807A

A method of implementing a sparse submanifold deconvolution between an input tensor and a filter, representable as a direct convolution between an input tensor and a plurality of sub-filters comprising weights of the filter, the method comprising: receiving the input tensor (4104, fig 41) in dense format, identifying active positions of the output tensor (4102, fig 41), generating, using an indexed-unfold operation, an input matrix (4106, fig 41) comprising elements of the input tensor in each non-zero sub-window (4108, fig 41) relevant to the active positions of the output tensor, and multiplying the input matrix with a weight matrix (4202, fig 42) comprising the sub-filters to generate an output matrix (4204, 42). The method may further comprise generating, using an indexed fold operation, the output tensor (4302, fig 43) by identifying, based on the active positions, the position in the output tensor of each element in the output matrix, and placing the elements into the corresponding positions in the output tensor. Also disclosed is a method of performing a convolution on the output tensor using a neural network accelerator to generate partial outputs which are combined to generate an output tensor.

TRANSFORMER NEURAL NETWORK IN MEMORY

Publication No.: US2024289597A1 29/08/2024
Applicant:
MICRON TECH INC [US]
Micron Technology, Inc
CN_116235186_PA

Abstract of: US2024289597A1

Apparatuses and methods can be related to implementing a transformer neural network in a memory. A transformer neural network can be implemented utilizing a resistive memory array. The memory array can comprise programmable memory cells that can be programmed and used to store weights of the transformer neural network and perform operations consistent with the transformer neural network.

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Publication No.: US2024289285A1 29/08/2024
Applicant:
GOOGLE LLC [US]
Google LLC
KR_20240105502_A

Abstract of: US2024289285A1

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
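The indexing scheme described, store the activations but record only the memory addresses holding non-zero values, then feed the data bus from those addresses, can be mimicked with plain lists. `build_nonzero_index` and `feed_data_bus` are hypothetical names; the claimed method performs this in a hardware compute unit:

```python
def build_nonzero_index(activations, base_address=0):
    """Store input activations in a simulated memory bank and build the
    index of address locations whose activation values are non-zero.
    Returns (stored, index)."""
    index = [base_address + i for i, a in enumerate(activations) if a != 0]
    return list(activations), index

def feed_data_bus(stored, index):
    """Provide activations to the data bus from the indexed (non-zero)
    addresses only, so the computational array skips zero values."""
    return [stored[addr] for addr in index]
```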
