Ministerio de Industria, Turismo y Comercio
 

Neural Networks

36 results
Last updated: 20/12/2024 [10:11:00]
Applications published in the last 30 days
Results 1 to 25 of 36

SYSTEM AND METHOD FOR OPTIMIZING NON-LINEAR CONSTRAINTS OF AN INDUSTRIAL PROCESS UNIT

Publication No.: US2024419136A1 19/12/2024
Applicant:
JIO PLATFORMS LTD [IN]
JIO PLATFORMS LIMITED
WO_2023073655_PA

Abstract of: US2024419136A1

The present invention provides a robust and effective solution for an entity or organization by enabling it to implement a system that facilitates the creation of a digital twin of a process unit, which can perform constrained optimization of control parameters to minimize or maximize an objective function. The system can capture the non-linearities of the industrial process, whereas current industrial process models try to approximate non-linear processes using linear approximations, which are not as accurate as neural networks. The proposed system can further create an end-to-end differentiable digital twin model of a process unit, and it uses gradient flows for optimization, in contrast to other digital twin models that are gradient-free.
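The gradient-flow optimization this abstract describes can be sketched in a few lines, assuming a stand-in quadratic "digital twin" with a known gradient and an invented box constraint (the model, bounds, and learning rate are illustrative, not the patent's):

```python
# Illustrative sketch only: a differentiable stand-in process model optimized
# under a box constraint by projected gradient descent.

def twin_predict(u):
    """Toy differentiable process model: predicted objective for control u."""
    return (u - 3.0) ** 2 + 1.0

def twin_grad(u):
    """Analytic gradient of the toy model (the end-to-end differentiability)."""
    return 2.0 * (u - 3.0)

def optimize_control(u0, lo, hi, lr=0.1, steps=200):
    """Minimize the objective over u subject to lo <= u <= hi."""
    u = u0
    for _ in range(steps):
        u -= lr * twin_grad(u)        # gradient flow step
        u = max(lo, min(hi, u))       # project back onto the constraint set
    return u

u_opt = optimize_control(0.0, lo=0.0, hi=2.0)
```

Projected gradient descent drives the control toward the unconstrained optimum (u = 3) and the projection pins it to the constraint boundary (u = 2), the kind of result a gradient-free search would need many more evaluations to locate.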

TWO-PASS END TO END SPEECH RECOGNITION

Publication No.: US2024420687A1 19/12/2024
Applicant:
GOOGLE LLC [US]
GOOGLE LLC
US_2022238101_A1

Abstract of: US2024420687A1

Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transducer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen, attend and spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.
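The two-pass idea can be pictured with stand-ins: a "first pass" that emits streaming n-best hypotheses with scores, and a "second pass" that rescores the finished hypotheses. Both scoring functions below are invented toys, not the RNN-T or LAS models of the patent:

```python
# Hypothetical two-pass decoding sketch with invented scorers.

def first_pass(frames):
    """Pretend streaming decoder: returns (hypothesis, acoustic_score) pairs."""
    return [("recognize speech", -3.2), ("wreck a nice beach", -3.0)]

def second_pass_score(hyp):
    """Pretend rescorer: favors hypotheses containing in-domain words."""
    vocab = {"recognize", "speech"}
    return sum(1.0 for w in hyp.split() if w in vocab)

def two_pass_decode(frames, weight=1.0):
    nbest = first_pass(frames)
    # Combine the first-pass score with the second-pass revision, pick the best.
    return max(nbest, key=lambda hs: hs[1] + weight * second_pass_score(hs[0]))[0]

result = two_pass_decode(frames=None)
```

The first pass alone would prefer "wreck a nice beach" (higher acoustic score); the second pass revises the ranking in favor of the in-domain hypothesis.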

INFERENCE METHOD AND DEVICE USING SPIKING NEURAL NETWORK

Publication No.: US2024419969A1 19/12/2024
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
KR_20200098308_A

Abstract of: US2024419969A1

Embodiments relate to an inference method and device using a spiking neural network including parameters determined using an analog-valued neural network (ANN). The spiking neural network used in the inference method and device includes an artificial neuron that may have a negative membrane potential or have a pre-charged membrane potential. Additionally, an inference operation by the inference method and device is performed after a predetermined time from an operating time point of the spiking neural network.
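As a toy illustration of a spiking neuron whose membrane may start pre-charged and may go negative, here is a minimal leaky integrate-and-fire sketch (the threshold, leak, and inputs are invented; the patent's ANN-derived parameters are not modeled):

```python
# Minimal leaky integrate-and-fire sketch; v0 pre-charges the membrane and
# the membrane value is allowed to become negative, per the abstract.

def run_spiking_neuron(inputs, threshold=1.0, v0=0.5, leak=0.9):
    """Simulate one neuron over a sequence of input currents."""
    v = v0                       # pre-charged membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x         # leaky integration; v may go negative
        if v >= threshold:
            spikes.append(1)
            v -= threshold       # soft reset after a spike
        else:
            spikes.append(0)
    return spikes, v

spikes, v_final = run_spiking_neuron([0.6, -2.0, 0.3, 1.5])
```

The pre-charge lets the first input cross the threshold immediately, while the strong inhibitory input drives the membrane negative without clamping it at zero.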

OPTIMIZING NEURAL NETWORK STRUCTURES FOR EMBEDDED SYSTEMS

Publication No.: US2024419968A1 19/12/2024
Applicant:
TESLA INC [US]
Tesla, Inc
US_2023289599_PA

Abstract of: US2024419968A1

A model training and implementation pipeline trains models for individual embedded systems. The pipeline iterates through multiple models and estimates the performance of the models. During a model generation stage, the pipeline translates the description of the model together with the model parameters into an intermediate representation in a language that is compatible with a virtual machine. The intermediate representation is agnostic to, and independent of, the configuration of the target platform. During a model performance estimation stage, the pipeline evaluates the performance of the models without training the models. Based on the analysis of the performance of the untrained models, a subset of models is selected. The selected models are then trained and the performance of the trained models is analyzed. Based on the analysis of the performance of the trained models, a single model is selected for deployment to the target platform.
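The two-stage selection flow can be sketched with stand-ins: a training-free cost proxy for the estimation stage and a pretend training routine for the final stage (all names, budgets, and scores below are hypothetical):

```python
# Hypothetical sketch: estimate candidates cheaply without training, keep a
# shortlist, then "train" only the shortlist and pick the winner.

def estimate_cost(model):
    """Cheap, training-free estimate, e.g. a parameter-count proxy."""
    return model["params"]

def train_and_eval(model):
    """Pretend training: larger feasible models score better, purely for demo."""
    return model["params"] ** 0.5

def select_model(candidates, budget=10_000, keep=2):
    # Stage 1: filter by estimated deployability without training anything.
    feasible = [m for m in candidates if estimate_cost(m) <= budget]
    shortlist = sorted(feasible, key=estimate_cost)[:keep]
    # Stage 2: train only the shortlist and keep the best trained model.
    return max(shortlist, key=train_and_eval)

candidates = [{"name": "tiny", "params": 1_000},
              {"name": "small", "params": 8_000},
              {"name": "huge", "params": 1_000_000}]
best = select_model(candidates)
```

The expensive training step only runs on the models that survive the cheap estimation stage, which is the point of the pipeline.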

JOINT AUTOMATIC SPEECH RECOGNITION AND SPEAKER DIARIZATION

Publication No.: US2024420701A1 19/12/2024
Applicant:
GOOGLE LLC [US]
Google LLC
US_2022199094_A1

Abstract of: US2024420701A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing audio data using neural networks.

Neural network architecture for transaction data processing

Publication No.: GB2631068A 18/12/2024
Applicant:
FEATURESPACE LTD [GB]
Featurespace Limited
GB_2631068_PA

Abstract of: GB2631068A

A machine learning system 600 for processing transaction data. The machine learning system 600 has a first processing stage 603 with: an interface 622 to receive a vector representation of a previous state for the first processing stage; a time difference encoding 628 to generate a vector representation of a time difference between the current and previous iterations/transactions. Combinatory logic (632, fig 6b) modifies the vector representation of the previous state based on the time difference encoding. Logic (634, fig 6b) combines the modified vector representation and a representation of the current transaction data to generate a vector representation of the current state. The machine learning system 600 also has a second processing stage 604 with a neural network architecture 660, 690 to receive data from the first processing stage and to map said data to a scalar value 602 representative of a likelihood that the proposed transaction presents an anomaly within a sequence of actions. The scalar value is used to determine whether to approve or decline the proposed transaction. The first stage may comprise a recurrent neural network and the second stage may comprise multiple attention heads.
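A rough sketch of the two stages: decay the previous state by a function of the time gap, fold in the current transaction, and map the result to a scalar likelihood. The exponential decay, linear scorer, and all constants are assumptions for illustration, not the patented architecture:

```python
# Toy two-stage transaction scorer: time-decayed state update + scalar score.
import math

def update_state(prev_state, dt, txn_features, tau=3600.0):
    """First stage: modify the previous state by a time-difference encoding."""
    decay = math.exp(-dt / tau)
    modified = [decay * s for s in prev_state]
    # Combine the modified state with the current transaction representation.
    return [m + x for m, x in zip(modified, txn_features)]

def anomaly_score(state, weights):
    """Second-stage stand-in: map the state vector to a scalar in (0, 1)."""
    z = sum(w * s for w, s in zip(weights, state))
    return 1.0 / (1.0 + math.exp(-z))

state = update_state([1.0, -0.5], dt=7200, txn_features=[0.2, 0.1])
score = anomaly_score(state, weights=[2.0, 1.0])
approve = score < 0.8            # decline when an anomaly is likely
```

A long gap between transactions shrinks the influence of the old state, so the score leans more on the current transaction.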

STEP-UNROLLED DENOISING NEURAL NETWORKS

Publication No.: US2024412042A1 12/12/2024
Applicant:
DEEPMIND TECH LTD [GB]
DeepMind Technologies Limited
US_2024412042_PA

Abstract of: US2024412042A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output sequences using a non-auto-regressive neural network.

In Silico Generation of Binding Agents

Publication No.: US2024412810A1 12/12/2024
Applicant:
FLAGSHIP PIONEERING INNOVATIONS VI LLC [US]
Flagship Pioneering Innovations VI, LLC
US_2024412810_PA

Abstract of: US2024412810A1

In some embodiments, methods and corresponding systems are disclosed for providing associated biopolymer sequence(s) to conform to a reference structure. The reference structure includes a target complex and the one or more associated biopolymer sequences. The biopolymer sequences are obtainable by the method, including embedding a graph representation using a neural network. The graph representation is featurized from the reference structure and includes a topology of the biopolymer with monomers as nodes and interactions between monomers as edges. The methods, in certain embodiments, further include processing the graph representation with a graph neural network or equivariant neural network that iteratively updates node and edge embeddings with a learned parametric function. The methods may further include converting the embedded graph representation to an energy landscape using a decoder. The methods can further include obtaining one or more biopolymer sequences from the energy landscape.

SPARSITY-AWARE NEURAL NETWORK PROCESSING

Publication No.: US2024412051A1 12/12/2024
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2024412051_PA

Abstract of: US2024412051A1

Various embodiments discussed herein are directed to improving hardware consumption and computing performance by performing neural network operations on dense tensors using sparse value information from original tensors. Such dense tensors are condensed representations of other original tensors that include zeros or other sparse values. In order to perform these operations, particular embodiments provide an indication, via a binary map, of a position of where the sparse values and non-sparse values are in the original tensors. Particular embodiments additionally or alternatively determine shape data of the original tensors so that these operations are accurate.
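The condensed-tensor idea can be shown in miniature: store only the non-sparse values as a dense list plus a binary map of their positions, then run an operation directly on the condensed form. The 1-D layout below is an invented simplification of whatever the hardware actually uses:

```python
# Sketch: condense a sparse tensor to (dense values, binary map) and compute
# a dot product that skips the sparse positions.

def condense(tensor):
    """Split a 1-D tensor into dense non-zero values and a binary position map."""
    bitmap = [1 if v != 0 else 0 for v in tensor]
    dense = [v for v in tensor if v != 0]
    return dense, bitmap

def sparse_dot(dense, bitmap, other):
    """Dot product over the condensed form, guided by the binary map."""
    total, k = 0.0, 0
    for pos, bit in enumerate(bitmap):
        if bit:                      # only multiply where a value exists
            total += dense[k] * other[pos]
            k += 1
    return total

dense, bitmap = condense([0, 3, 0, 0, 2, 0])
result = sparse_dot(dense, bitmap, [1, 1, 1, 1, 10, 1])
```

The binary map is what keeps the condensed operation accurate: it records where each dense value belongs in the original tensor.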

OUT-OF-DISTRIBUTION DETECTION FOR PERSONALIZING NEURAL NETWORK MODELS

Publication No.: US2024412084A1 12/12/2024
Applicant:
QUALCOMM INC [US]
QUALCOMM Incorporated
US_2024412084_PA

Abstract of: US2024412084A1

A method for generating a personalized artificial neural network (ANN) model receives an input at a first artificial neural network. The input is processed to extract a set of intermediate features. The method determines if the input is out-of-distribution relative to a dataset used to train the first artificial neural network. The intermediate features corresponding to the input are provided to a second artificial neural network based on the out-of-distribution determination. Additionally, the system resources for performing the training and inference tasks of the first artificial neural network and the second, personalized artificial neural network are allocated according to a computational complexity of the training and inference tasks and a power consumption of the resources.
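A hedged sketch of the routing decision: treat "out-of-distribution" as the feature vector's distance from the training mean exceeding a threshold, and send such inputs to a second model. The feature extractor, distance test, and both "models" are invented stand-ins:

```python
# Toy OOD-based routing between a base model and a personalized model.

def extract_features(x):
    """Stand-in for the first network's intermediate features."""
    return [x, x * x]

def is_ood(features, train_mean, threshold=2.0):
    """OOD if the Euclidean distance to the training mean exceeds a threshold."""
    dist = sum((f - m) ** 2 for f, m in zip(features, train_mean)) ** 0.5
    return dist > threshold

def personalized_predict(x, train_mean):
    feats = extract_features(x)
    if is_ood(feats, train_mean):
        return "second_model"    # personalize on inputs the base model missed
    return "first_model"

route_near = personalized_predict(1.0, train_mean=[1.0, 1.0])
route_far = personalized_predict(5.0, train_mean=[1.0, 1.0])
```

Inputs close to the training distribution stay with the first network; outliers are handed to the second, personalized one.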

DYNAMIC PRECISION MANAGEMENT FOR INTEGER DEEP LEARNING PRIMITIVES

Publication No.: US2024412318A1 12/12/2024
Applicant:
INTEL CORP [US]
Intel Corporation
US_2024412318_PA

Abstract of: US2024412318A1

One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert the elements of a floating-point tensor, converting the floating-point tensor into a fixed-point tensor.
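A minimal software sketch of dynamic-precision conversion: pick a scale from the tensor's range, quantize the floats to integers, and dequantize to check the error. Symmetric int8-style quantization is an assumption here, not the patent's scheme:

```python
# Toy float-to-fixed-point conversion with a per-tensor dynamic scale.

def to_fixed_point(tensor, bits=8):
    """Convert floats to integers; the scale adapts to the tensor's range."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8 bits
    scale = max(abs(v) for v in tensor) / qmax or 1.0  # guard all-zero tensors
    q = [round(v / scale) for v in tensor]
    return q, scale

def from_fixed_point(q, scale):
    """Dequantize back to floats for comparison."""
    return [v * scale for v in q]

q, scale = to_fixed_point([0.5, -1.0, 0.25])
restored = from_fixed_point(q, scale)
```

The dynamic scale is what makes the precision "dynamic": a tensor with a small range gets a fine scale, so the same integer width covers it more accurately.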

COMPUTE OPTIMIZATION MECHANISM FOR DEEP NEURAL NETWORKS

Publication No.: EP4475067A1 11/12/2024
Applicant:
INTEL CORP [US]
INTEL Corporation
EP_4475067_A1

Abstract of: EP4475067A1

A graphics processing unit (GPU) is disclosed. The GPU comprises a host interface to enable a connection with a host processor, a set of compute clusters, each including a set of graphics multiprocessors, a cache memory shared by the set of compute clusters, and memory coupled with the set of compute clusters via a set of memory controllers. The graphics multiprocessors of a compute cluster include multiple types of integer and floating point logic units to perform computational operations at a range of precisions.

CONSTRAINED DEVICE PLACEMENT USING NEURAL NETWORKS

Publication No.: US2024403660A1 05/12/2024
Applicant:
GOOGLE LLC [US]
Google LLC
WO_2023059811_PA

Abstract of: US2024403660A1

Systems and methods for determining a placement for a computational graph across multiple hardware devices. One of the methods includes generating a policy output using a policy neural network and using the policy output to generate a final placement that satisfies one or more constraints.

CONTROL SYSTEM WITH OPTIMIZATION OF NEURAL NETWORK PREDICTOR

Publication No.: AU2023280790A1 05/12/2024
Applicant:
IMUBIT ISRAEL LTD
AU_2023280790_PA

Abstract of: AU2023280790A1

A predictive control system includes controllable equipment and a controller. The controller is configured to use a neural network model to predict values of controlled variables predicted to result from operating the controllable equipment in accordance with corresponding values of manipulated variables, use the values of the controlled variables predicted by the neural network model to evaluate an objective function that defines a control objective as a function of at least the controlled variables, perform a predictive optimization process to generate optimal values of the manipulated variables for a plurality of time steps in an optimization period using the neural network model and the objective function, and operate the controllable equipment by providing the controllable equipment with control signals based on the optimal values of the manipulated variables generated by performing the predictive optimization process.
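One way to picture the predictive optimization is a stand-in predictor plus a coarse search over candidate manipulated-variable values. The linear "neural" predictor, the setpoint objective, and the grid search are all invented; the patent does not specify this procedure:

```python
# Toy predictive control: a stand-in predictor maps a manipulated variable
# (MV) to a controlled variable (CV); the objective penalizes deviation from
# a setpoint; a coarse search plays the role of the predictive optimization.

def predict_cv(mv):
    """Stand-in for the neural network predictor."""
    return 2.0 * mv + 1.0

def objective_term(mv, setpoint=5.0):
    """Control objective: squared deviation of the predicted CV from setpoint."""
    return (predict_cv(mv) - setpoint) ** 2

# Optimization over a horizon of three time steps; the toy model is static,
# so each step's best MV can be searched independently.
grid = [i * 0.1 for i in range(51)]          # candidate MV values 0.0 .. 5.0
mv_plan = [min(grid, key=objective_term) for _ in range(3)]
```

The resulting plan of manipulated-variable values is what would be turned into control signals for the equipment.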

MACHINE LEARNING ACCELERATOR MECHANISM

Publication No.: US2024403620A1 05/12/2024
Applicant:
INTEL CORP [US]
Intel Corporation
US_2023053289_PA

Abstract of: US2024403620A1

An apparatus to facilitate acceleration of machine learning operations is disclosed. The apparatus comprises at least one processor to perform operations to implement a neural network, and accelerator logic communicatively coupled to the processor to perform compute operations for the neural network.

MULTIVARIABLE TIME SERIES PROCESSING METHOD, DEVICE AND MEDIUM

Publication No.: US2024403382A1 05/12/2024
Applicant:
BEIJING VOLCANO ENGINE TECH CO LTD [CN]
Beijing Volcano Engine Technology Co., Ltd
KR_20240148369_PA

Abstract of: US2024403382A1

Provided are a method and a device for multivariable time series processing. The method comprises obtaining a time series set comprising a plurality of first time series segments having a same length and being a multivariable time series; inputting the first time series segment into a graph neural network to predict a multivariable reference value corresponding to a first time point that is a next time point adjacent to a latest time point in the first time series segment; determining an optimization function based on multivariable reference values corresponding to a plurality of the first time points and corresponding multivariable series tags; determining values of respective parameters in a causal matrix with an objective of minimizing the optimization function; and determining, based on the values of the respective parameters in the causal matrix, a causal relationship between multiple variables in the multivariable time series.

KNOWLEDGE-AUGMENTED FEATURE ADAPTER FOR VISION-LANGUAGE MODEL NEURAL NETWORKS

Publication No.: WO2024248787A1 05/12/2024
Applicant:
GOOGLE LLC [US]
GOOGLE LLC

Abstract of: WO2024248787A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a multi-modal machine learning task on a network input that includes text and an image to generate a network output. One of the systems includes a vision-language model (VLM) neural network. The VLM neural network includes a VLM backbone neural network and an attention-based feature adapter. The VLM neural network has access to an external dataset that stores multiple text items.

END-TO-END STREAMING KEYWORD SPOTTING

Publication No.: EP4471670A2 04/12/2024
Applicant:
GOOGLE LLC [US]
Google LLC
EP_4471670_PA

Abstract of: EP4471670A2

A method (600) for detecting a hotword includes receiving a sequence of input frames (210) that characterize streaming audio (118) captured by a user device (102) and generating a probability score (350) indicating a presence of the hotword in the streaming audio using a memorized neural network (300). The network includes sequentially-stacked singular value decomposition filter (SVDF) layers (302) and each SVDF layer includes at least one neuron (312). Each neuron includes a respective memory component (330), a first stage (320) configured to perform filtering on audio features (410) of each input frame individually and output to the memory component, and a second stage (340) configured to perform filtering on all the filtered audio features residing in the respective memory component. The method also includes determining whether the probability score satisfies a hotword detection threshold and initiating a wake-up process on the user device for processing additional terms.
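The two-stage SVDF neuron can be sketched roughly as follows, with arbitrary demo filter weights (the trained weights, layer stacking, and hotword scoring head of the patent are omitted):

```python
# Rough single-neuron SVDF sketch: stage 1 filters each frame's features into
# a scalar pushed into a fixed-size memory; stage 2 filters across the memory.
from collections import deque

class SVDFNeuron:
    def __init__(self, feature_filter, time_filter):
        self.feature_filter = feature_filter          # stage-1 weights
        self.time_filter = time_filter                # stage-2 weights
        self.memory = deque([0.0] * len(time_filter),
                            maxlen=len(time_filter))  # per-neuron memory

    def step(self, frame):
        # Stage 1: filter this frame's audio features individually.
        filtered = sum(w * x for w, x in zip(self.feature_filter, frame))
        self.memory.append(filtered)                  # oldest entry drops out
        # Stage 2: filter everything residing in the memory component.
        return sum(w * m for w, m in zip(self.time_filter, self.memory))

neuron = SVDFNeuron(feature_filter=[1.0, 0.5], time_filter=[0.2, 0.8])
outputs = [neuron.step(f) for f in [[1.0, 2.0], [0.0, 2.0]]]
```

Because the memory has a fixed length, the neuron processes streaming audio frame by frame without reprocessing the whole utterance.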

ELECTRONIC DEVICE FOR DETERMINING INFERENCE DISTRIBUTION RATIO OF ARTIFICIAL NEURAL NETWORK AND OPERATION METHOD THEREOF

Publication No.: EP4471668A1 04/12/2024
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
EP_4471668_PA

Abstract of: EP4471668A1

Provided is an electronic device including a memory storing a state inference model, and at least one instruction; a transceiver; and at least one processor configured to execute the at least one instruction to: obtain, via the transceiver, first state information of each of a plurality of devices at a first time point, obtain second state information of each of the plurality of devices at a second time point that is a preset time interval after the first time point, by inputting the first state information to the state inference model, and determine an inference distribution ratio of the artificial neural network of each of the plurality of devices, based on the second state information of each of the plurality of devices, where the electronic device is determined among the plurality of devices, based on network states of the plurality of devices.

METHOD AND APPARATUS FOR COMPILING NEURAL NETWORK MODEL, AND METHOD AND APPARATUS FOR TRAINING OPTIMIZATION MODEL

Publication No.: US2024394531A1 28/11/2024
Applicant:
BEIJING HORIZON INFORMATION TECH CO LTD [CN]
BEIJING HORIZON INFORMATION TECHNOLOGY CO., LTD
EP_4468202_PA

Abstract of: US2024394531A1

Embodiments of this disclosure provide a method and apparatus for compiling a neural network model, and a method and an apparatus for training an optimization model. The method includes: obtaining a to-be-compiled neural network model; determining an intermediate instruction sequence corresponding to the to-be-compiled neural network model based on the to-be-compiled neural network model; processing the intermediate instruction sequence by using a pre-trained instruction sequence optimization model, to obtain a target optimization parameter corresponding to the intermediate instruction sequence; determining an optimization instruction sequence corresponding to the intermediate instruction sequence based on the target optimization parameter; and converting the optimization instruction sequence into an executable instruction sequence, to obtain a target instruction sequence that is executable by a neural network processor corresponding to the to-be-compiled neural network model. According to the embodiments of this disclosure, compilation time can be greatly reduced, thereby effectively improving compilation efficiency.

METHODS AND SYSTEMS FOR WORD EDIT DISTANCE EMBEDDING

Publication No.: US2024395245A1 28/11/2024
Applicant:
COLLIBRA BELGIUM BV [BE]
Collibra Belgium BV
MX_2022011471_A

Abstract of: US2024395245A1

A system for classifying words in a batch of words can include at least one memory device storing instructions for causing at least one processor to create dictionary vectors for each of a plurality of dictionary words using a neural network (NN), store each dictionary vector along with a classification indicator corresponding to the associated dictionary word, and create word vectors for each word in a batch of words for classification using the NN. The closest matching dictionary vectors are found for each word vector, and the classification indicators of the closest matchingding dictionary vector for each word vector in the batch are reported.
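A sketch with a made-up embedding: the neural embedding of the patent is replaced here by a simple letter-frequency vector, so the nearest-dictionary-vector lookup can be shown end to end. The words, labels, and distance metric are illustrative:

```python
# Toy nearest-dictionary-vector classification with a stand-in embedding.

def embed(word):
    """Stand-in for the NN embedding: letter-frequency vector over a-z."""
    return [word.count(chr(c)) for c in range(ord("a"), ord("z") + 1)]

def classify_batch(batch, dictionary):
    """dictionary maps dictionary word -> classification indicator."""
    entries = [(embed(w), label) for w, label in dictionary.items()]
    results = []
    for word in batch:
        v = embed(word)
        # Closest matching dictionary vector by squared Euclidean distance.
        _, label = min(entries,
                       key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], v)))
        results.append(label)
    return results

labels = classify_batch(["emial", "strete"],
                        {"email": "contact", "street": "address"})
```

Even misspelled words land near their dictionary entries in this embedding, which is the edit-distance-like behavior the patent's learned vectors aim for.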

SYSTEMS AND METHODS FOR PROCESSING DATA COLLECTED IN AN INDUSTRIAL ENVIRONMENT USING NEURAL NETWORKS

Publication No.: US2024396647A1 28/11/2024
Applicant:
STRONG FORCE IOT PORTFOLIO 2016 LLC [US]
Strong Force IoT Portfolio 2016, LLC
US_2024214086_PA

Abstract of: US2024396647A1

Methods and an expert system for processing a plurality of inputs collected from sensors in an industrial environment are disclosed. A modular neural network is disclosed, where the expert system uses one type of neural network for recognizing a pattern relating to at least one of the sensors or components of the industrial environment, and a different neural network for self-organizing a data collection activity in the industrial environment. A data communication network configured to communicate at least a portion of the plurality of inputs collected from the sensors to a storage device is also disclosed.

NEURAL CONTEXTUAL BANDIT BASED COMPUTATIONAL RECOMMENDATION METHOD AND APPARATUS

Publication No.: US2024394776A1 28/11/2024
Applicant:
YAHOO ASSETS LLC [US]
YAHOO ASSETS LLC
US_2023196441_PA

Abstract of: US2024394776A1

Disclosed are systems and methods utilizing neural contextual bandit for improving interactions with and between computers in content generating, searching, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to make item recommendations using latent relations and latent representations, which can improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods use neural network modeling in automatic selection of a number of items for recommendation to a user and using feedback in connection with the recommendation for further training of the model(s).

VECTOR-BASED SEARCH RESULT GENERATION

Publication No.: US2024394236A1 28/11/2024
Applicant:
YEXT INC [US]
Yext, Inc
WO_2022094113_A1

Abstract of: US2024394236A1

A system and method to generate search results in response to a search query based on comparisons of embedding vectors. The system and method receive, from an end user system, a search query including a set of keywords associated with the entity. Using a neural network, an embedding vector is identified based on the set of keywords of the search query. The system and method compares the embedding vector associated with the search query to a set of embedding vectors associated with a set of structured data elements relating to the entity. Based on the comparison, a set of matching structured data elements is identified. The system and method generate a search result in response to the search query, wherein the search result includes at least a portion of the set of matching structured data elements. The search result is displayed via an interface of the end user system.
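A minimal sketch, assuming precomputed embedding vectors: the query vector is compared by cosine similarity against vectors for structured data elements, and the best matches form the search result. The vectors and element names below are invented:

```python
# Toy vector-based search: rank structured data elements by cosine similarity
# between their embedding vectors and the query's embedding vector.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, elements, top_k=1):
    """elements: list of (structured_data_element, embedding_vector)."""
    ranked = sorted(elements, key=lambda e: cosine(query_vec, e[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

elements = [("store hours page", [0.9, 0.1, 0.0]),
            ("careers page", [0.0, 0.2, 0.9])]
result = search([1.0, 0.0, 0.1], elements)
```

In a real deployment the query vector would come from the neural network applied to the keywords, but the comparison step works the same way.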

SYSTEM, METHOD, AND APPARATUS FOR RECURRENT NEURAL NETWORKS

Publication No.: US2024394019A1 28/11/2024
Applicant:
NEW YORK UNIV [US]
NEW YORK UNIVERSITY
US_2023195421_PA

Abstract of: US2024394019A1

A method for computation with recurrent neural networks includes receiving an input drive and a recurrent drive, producing at least one modulatory response; computing at least one output response, each output response including a sum of: (1) the input drive multiplied by a function of at least one of the at least one modulatory response, each input drive including a function of at least one input, and (2) the recurrent drive multiplied by a function of at least one of the at least one modulatory response, each recurrent drive including a function of the at least one output response, each modulatory response including a function of at least one of (i) the at least one input, (ii) the at least one output response, or (iii) at least one first offset, and computing a readout of the at least one output response.
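A loose scalar sketch of the claimed update, with invented functions: the output is the input drive times a function of one modulatory response plus the recurrent drive times a function of another. The gains, saturating modulator, and inputs are toy values, not the patent's formulation:

```python
# Toy modulated recurrent update: output = input_drive * f(mod_from_input)
#                                        + recurrent_drive * f(mod_from_output)

def modulator(u):
    """Modulatory response: a saturating function of its argument."""
    return 1.0 / (1.0 + abs(u))

def step(x, y_prev, w_in=1.0, w_rec=0.5):
    input_drive = w_in * x                 # a function of the input
    recurrent_drive = w_rec * y_prev       # a function of the output response
    a = modulator(x)                       # modulatory response from the input
    b = modulator(y_prev)                  # modulatory response from the output
    return input_drive * a + recurrent_drive * b

y = 0.0
trace = []                                 # readout of the output responses
for x in [1.0, 1.0, 0.0]:
    y = step(x, y)
    trace.append(y)
```

With the input removed, the modulated recurrent drive alone carries the response, which decays rather than persisting unchanged.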
