Ministerio de Industria, Turismo y Comercio
 

Neural Networks

Results: 23
Last updated: 30/12/2024 [07:14:00]
Applications published in the last 30 days
Results 1 to 23

INFERENCE METHOD AND DEVICE USING SPIKING NEURAL NETWORK

Publication No.: US2024419969A1 19/12/2024
Applicant: 
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
KR_20200098308_A

Abstract of: US2024419969A1

Embodiments relate to an inference method and device using a spiking neural network including parameters determined using an analog-valued neural network (ANN). The spiking neural network used in the inference method and device includes an artificial neuron that may have a negative membrane potential or have a pre-charged membrane potential. Additionally, an inference operation by the inference method and device is performed after a predetermined time from an operating time point of the spiking neural network.
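As a rough illustration of the ideas in this abstract, the sketch below implements a single integrate-and-fire neuron whose membrane potential may go negative, starts from a pre-charged value, and whose spikes are only counted after a predetermined delay from the start of operation. The parameter values and the subtraction-based reset are illustrative assumptions, not the patented method.

```python
def snn_inference(inputs, weight, threshold=1.0, v_init=0.5, t_start=2):
    """Integrate-and-fire neuron sketch: the membrane potential may go
    negative, starts pre-charged at v_init, and spikes are only counted
    once t_start steps have elapsed from the operating time point."""
    v = v_init                    # pre-charged membrane potential
    spikes = 0
    for t, x in enumerate(inputs):
        v += weight * x           # integrate weighted input (may be negative)
        if v >= threshold and t >= t_start:
            spikes += 1
            v -= threshold        # soft reset by subtraction
    return spikes, v
```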

JOINT AUTOMATIC SPEECH RECOGNITION AND SPEAKER DIARIZATION

Publication No.: US2024420701A1 19/12/2024
Applicant: 
GOOGLE LLC [US]
Google LLC
US_2022199094_A1

Abstract of: US2024420701A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing audio data using neural networks.

OPTIMIZING NEURAL NETWORK STRUCTURES FOR EMBEDDED SYSTEMS

Publication No.: US2024419968A1 19/12/2024
Applicant: 
TESLA INC [US]
Tesla, Inc
US_2023289599_PA

Abstract of: US2024419968A1

A model training and implementation pipeline trains models for individual embedded systems. The pipeline iterates through multiple models and estimates the performance of the models. During a model generation stage, the pipeline translates the description of the model together with the model parameters into an intermediate representation in a language that is compatible with a virtual machine. The intermediate representation is agnostic to, and independent of, the configuration of the target platform. During a model performance estimation stage, the pipeline evaluates the performance of the models without training the models. Based on the analysis of the performance of the untrained models, a subset of models is selected. The selected models are then trained and the performance of the trained models is analyzed. Based on the analysis of the performance of the trained models, a single model is selected for deployment to the target platform.

TWO-PASS END TO END SPEECH RECOGNITION

Publication No.: US2024420687A1 19/12/2024
Applicant: 
GOOGLE LLC [US]
GOOGLE LLC
US_2022238101_A1

Abstract of: US2024420687A1

Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transformer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen attend spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.

SYSTEM AND METHOD FOR OPTIMIZING NON-LINEAR CONSTRAINTS OF AN INDUSTRIAL PROCESS UNIT

Publication No.: US2024419136A1 19/12/2024
Applicant: 
JIO PLATFORMS LTD [IN]
JIO PLATFORMS LIMITED
WO_2023073655_PA

Abstract of: US2024419136A1

The present invention provides a robust and effective solution to an entity or an organization by enabling it to implement a system for facilitating creation of a digital twin of a process unit which can perform constrained optimization of control parameters to minimize or maximize an objective function. The system can capture non-linearities of the industrial process, whereas current industrial process models approximate the non-linear process using linear approximations, which are less accurate than neural networks. The proposed system can further create an end-to-end differentiable digital twin model of a process unit and use gradient flows for optimization, as compared to other digital twin models that are gradient-free.

Neural network architecture for transaction data processing

Publication No.: GB2631068A 18/12/2024
Applicant: 
FEATURESPACE LTD [GB]
Featurespace Limited
GB_2631068_PA

Abstract of: GB2631068A

A machine learning system 600 for processing transaction data. The machine learning system 600 has a first processing stage 603 with: an interface 622 to receive a vector representation of a previous state for the first processing stage; a time difference encoding 628 to generate a vector representation of a time difference between the current and previous iterations/transactions. Combinatory logic (632, fig 6b) modifies the vector representation of the previous state based on the time difference encoding. Logic (634, fig 6b) combines the modified vector representation and a representation of the current transaction data to generate a vector representation of the current state. The machine learning system 600 also has a second processing stage 604 with a neural network architecture 660, 690 to receive data from the first processing stage and to map said data to a scalar value 602 representative of a likelihood that the proposed transaction presents an anomaly within a sequence of actions. The scalar value is used to determine whether to approve or decline the proposed transaction. The first stage may comprise a recurrent neural network and the second stage may comprise multiple attention heads.
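The first processing stage described above can be sketched as a single update step: the previous-state vector is modified by a time-difference encoding, then combined with the current transaction's features. The exponential-decay encoding, the `alpha` constant, and the `tanh` combination below are illustrative assumptions standing in for the learned components, not the patented architecture.

```python
import numpy as np

def step(prev_state, delta_t, features, alpha=0.1):
    """One first-stage iteration sketch: decay the previous state as a
    function of the time difference between transactions, then combine
    it with the current transaction's feature vector."""
    decay = np.exp(-alpha * delta_t)          # time-difference encoding
    modified = prev_state * decay             # combinatory logic on previous state
    new_state = np.tanh(modified + features)  # combine with current transaction data
    return new_state
```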

SPARSITY-AWARE NEURAL NETWORK PROCESSING

Publication No.: US2024412051A1 12/12/2024
Applicant: 
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2024412051_PA

Abstract of: US2024412051A1

Various embodiments discussed herein are directed to improving hardware consumption and computing performance by performing neural network operations on dense tensors using sparse value information from original tensors. Such dense tensors are condensed representations of other original tensors that include zeros or other sparse values. In order to perform these operations, particular embodiments provide an indication, via a binary map, of a position of where the sparse values and non-sparse values are in the original tensors. Particular embodiments additionally or alternatively determine shape data of the original tensors so that these operations are accurate.
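The dense-tensor-plus-binary-map representation in this abstract can be sketched in a few lines: non-sparse values go into a condensed dense tensor, a binary map records their positions, and the original shape is kept so operations (or reconstruction) stay accurate. Treating zero as the sparse value is an illustrative assumption.

```python
import numpy as np

def compress(original):
    """Condense a sparse tensor: keep only the non-sparse (non-zero)
    values in a dense tensor, record their positions in a binary map,
    and remember the original shape."""
    binary_map = (original != 0)
    dense = original[binary_map]          # flattened non-sparse values
    return dense, binary_map, original.shape

def decompress(dense, binary_map, shape):
    """Reconstruct the original tensor from the dense values, the
    binary map of positions, and the recorded shape."""
    out = np.zeros(shape, dtype=dense.dtype)
    out[binary_map] = dense
    return out
```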

NON-LINEAR APPROXIMATION ROBUST TO INPUT RANGE OF HOMOMORPHIC ENCRYPTION ANALYTICS

Publication No.: US2024413966A1 12/12/2024
Applicant: 
IBM [US]
International Business Machines Corporation
US_2024413966_PA

Abstract of: US2024413966A1

A technique for privacy-preserving homomorphic inferencing using a neural network having an activation function, such as a non-linear high-degree polynomial. The network is trained to learn input features of an input feature vector together with their associated inverses. During inferencing, an encrypted data point is received at the network. The data point comprises an input feature vector that has been extended with a set of one or more additional feature values, the set of one or more additional feature values having been generated by applying a normalized inverse function to respective one or more features in the feature vector. Homomorphic inferencing is performed on the encrypted data point using the machine learning model to generate an encrypted result, which is then returned. By applying the normalized inverse function, the high-degree polynomial can use any value of an input feature during inferencing, whether the value is within or outside of a particular input range.
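The feature-extension step described above can be sketched as follows: each input feature vector is extended with values produced by a normalized inverse function of its features. The unit-norm normalization and the `eps` stabilizer below are illustrative assumptions; the patent does not specify this particular normalization.

```python
import numpy as np

def extend_with_inverses(x, eps=1e-6):
    """Extend an input feature vector with normalized inverses of its
    features, so that a high-degree polynomial activation can handle
    feature values inside or outside a particular input range."""
    inv = 1.0 / (x + eps)                 # inverse of each feature
    inv = inv / np.linalg.norm(inv)       # normalized inverse function
    return np.concatenate([x, inv])
```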

OUT-OF-DISTRIBUTION DETECTION FOR PERSONALIZING NEURAL NETWORK MODELS

Publication No.: US2024412084A1 12/12/2024
Applicant: 
QUALCOMM INC [US]
QUALCOMM Incorporated
US_2024412084_PA

Abstract of: US2024412084A1

A method for generating a personalized artificial neural network (ANN) model receives an input at a first artificial neural network. The input is processed to extract a set of intermediate features. The method determines if the input is out-of-distribution relative to a dataset used to train the first artificial neural network. The intermediate features corresponding to the input are provided to a second artificial neural network based on the out-of-distribution determination. Additionally, the system resources for performing the training and inference tasks of the first artificial neural network and the second, personalized artificial neural network are allocated according to a computational complexity of the training and inference tasks and a power consumption of the resources.
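The routing decision in this abstract can be sketched with a simple statistical test: intermediate features that fall far from the training-set statistics are treated as out-of-distribution and forwarded to the second, personalized network. The z-score test and the threshold `k` are illustrative assumptions; the patent does not specify the detection criterion.

```python
import numpy as np

def route(features, train_mean, train_std, second_model, first_model, k=3.0):
    """Sketch of out-of-distribution routing: features more than k
    standard deviations from the training statistics are sent to the
    second (personalized) network; otherwise the first network is used."""
    z = np.abs((features - train_mean) / train_std)
    if np.any(z > k):                     # out-of-distribution input
        return second_model(features)
    return first_model(features)
```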

STEP-UNROLLED DENOISING NEURAL NETWORKS

Publication No.: US2024412042A1 12/12/2024
Applicant: 
DEEPMIND TECH LTD [GB]
DeepMind Technologies Limited
US_2024412042_PA

Abstract of: US2024412042A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output sequences using a non-auto-regressive neural network.

In Silico Generation of Binding Agents

Publication No.: US2024412810A1 12/12/2024
Applicant: 
FLAGSHIP PIONEERING INNOVATIONS VI LLC [US]
Flagship Pioneering Innovations VI, LLC
US_2024412810_PA

Abstract of: US2024412810A1

In some embodiments, methods and corresponding systems are disclosed for providing associated biopolymer sequence(s) to conform to a reference structure. The reference structure includes a target complex and the one or more associated biopolymer sequences. The biopolymer sequences are obtainable by the method, including embedding a graph representation using a neural network. The graph representation is featurized from the reference structure and includes a topology of the biopolymer with monomers as nodes and interactions between monomers as edges. The methods, in certain embodiments, further include processing the graph representation with a graph neural network or equivariant neural network that iteratively updates node and edge embeddings with a learned parametric function. The methods may further include converting the embedded graph representation to an energy landscape using a decoder. The methods can further include obtaining one or more biopolymer sequences from the energy landscape.

DYNAMIC PRECISION MANAGEMENT FOR INTEGER DEEP LEARNING PRIMITIVES

Publication No.: US2024412318A1 12/12/2024
Applicant: 
INTEL CORP [US]
Intel Corporation
US_2024412318_PA

Abstract of: US2024412318A1

One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert the elements of a floating-point tensor into a fixed-point tensor.
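A software sketch of dynamic-precision float-to-fixed-point conversion: the scale is chosen per tensor from its dynamic range, then each element is quantized to an integer. The symmetric per-tensor scaling scheme below is an illustrative assumption, not Intel's hardware design.

```python
import numpy as np

def to_fixed_point(tensor, bits=8):
    """Dynamic-precision sketch: derive a per-tensor scale from the
    tensor's dynamic range, then quantize floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.abs(tensor).max() / qmax   # dynamic per-tensor scale
    fixed = np.round(tensor / scale).astype(np.int8)
    return fixed, scale
```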

COMPUTE OPTIMIZATION MECHANISM FOR DEEP NEURAL NETWORKS

Publication No.: EP4475067A1 11/12/2024
Applicant: 
INTEL CORP [US]
INTEL Corporation
EP_4475067_A1

Abstract of: EP4475067A1

A graphics processing unit (GPU) is disclosed. The GPU comprises a host interface to enable a connection with a host processor, a set of compute clusters, each including a set of graphics multiprocessors, a cache memory shared by the set of compute clusters, and memory coupled with the set of compute units via a set of memory controllers. The graphics multiprocessors of a compute cluster include multiple types of integer and floating point logic units to perform computational operations at a range of precisions.

MACHINE LEARNING ACCELERATOR MECHANISM

Publication No.: US2024403620A1 05/12/2024
Applicant: 
INTEL CORP [US]
Intel Corporation
US_2023053289_PA

Abstract of: US2024403620A1

An apparatus to facilitate acceleration of machine learning operations is disclosed. The apparatus comprises at least one processor to perform operations to implement a neural network, and accelerator logic communicatively coupled to the processor to perform compute operations for the neural network.

CONTROL SYSTEM WITH OPTIMIZATION OF NEURAL NETWORK PREDICTOR

Publication No.: AU2023280790A1 05/12/2024
Applicant: 
IMUBIT ISRAEL LTD
IMUBIT ISRAEL LTD
AU_2023280790_PA

Abstract of: AU2023280790A1

A predictive control system includes controllable equipment and a controller. The controller is configured to use a neural network model to predict values of controlled variables predicted to result from operating the controllable equipment in accordance with corresponding values of manipulated variables, use the values of the controlled variables predicted by the neural network model to evaluate an objective function that defines a control objective as a function of at least the controlled variables, perform a predictive optimization process to generate optimal values of the manipulated variables for a plurality of time steps in an optimization period using the neural network model and the objective function, and operate the controllable equipment by providing the controllable equipment with control signals based on the optimal values of the manipulated variables generated by performing the predictive optimization process.
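The predictive-optimization loop described above can be sketched generically: repeatedly predict the controlled variables from candidate manipulated variables via the model, evaluate the objective, and move the manipulated variables down its gradient. Both `predict` and `objective` below are illustrative stand-ins for the trained neural network model and the control objective, and the finite-difference gradient is an assumption for self-containment.

```python
import numpy as np

def optimize_manipulated(u0, predict, objective, lr=0.1, steps=200, h=1e-5):
    """Sketch of a predictive optimization loop over manipulated
    variables u: descend the gradient of objective(predict(u)),
    estimated here by central finite differences."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(u)
        for i in range(len(u)):
            du = np.zeros_like(u)
            du[i] = h
            grad[i] = (objective(predict(u + du)) - objective(predict(u - du))) / (2 * h)
        u -= lr * grad                    # move toward optimal setpoints
    return u
```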

METHOD FOR INDUCTIVE KNOWLEDGE GRAPH EMBEDDING USING RELATION GRAPHS AND SYSTEM THEREOF

Publication No.: US2024403601A1 05/12/2024
Applicant: 
KOREA ADVANCED INST SCI & TECH [KR]
Korea Advanced Institute of Science and Technology
US_2024403601_PA

Abstract of: US2024403601A1

Disclosed is an inductive knowledge graph embedding method and system through a relation graph. An inductive knowledge graph embedding method performed by a knowledge graph embedding system may include training a graph neural network-based knowledge graph embedding model using a knowledge graph and a relation graph generated from the knowledge graph; and performing link prediction for the knowledge graph that includes a new relation and a new entity through the trained graph neural network-based knowledge graph embedding model.

MULTIVARIABLE TIME SERIES PROCESSING METHOD, DEVICE AND MEDIUM

Publication No.: US2024403382A1 05/12/2024
Applicant: 
BEIJING VOLCANO ENGINE TECH CO LTD [CN]
Beijing Volcano Engine Technology Co., Ltd
KR_20240148369_PA

Abstract of: US2024403382A1

Provided are a method and a device of multivariable time series processing. The method comprises obtaining a time series set comprising a plurality of first time series segments having a same length and being a multivariable time series; inputting the first time series segment into a graph neural network to predict a multivariable reference value corresponding to a first time point that is a next time point adjacent to a latest time point in the first time series segment; determining an optimization function based on multivariable reference values corresponding to a plurality of the first time points and corresponding multivariable series tags; determining values of respective parameters in the causal matrix with an objective of minimizing the optimization function; and determining, based on the values of the respective parameters in the causal matrix, a causal relationship between multiple variables in the multivariable time series.

MULTIMODAL DEEP LEARNING WITH BOOSTED TREES

Publication No.: US2024403605A1 05/12/2024
Applicant: 
IBM [US]
International Business Machines Corporation
US_2024403605_PA

Abstract of: US2024403605A1

Aspects of the invention include techniques for leveraging a joint knowledge distillation-based approach for multimodal deep learning with boosted trees. A non-limiting example method includes training a plurality of unimodal teacher models. Each unimodal teacher model of the plurality of unimodal teacher models can be trained using training data from a unique modality of a plurality of modalities. For each of the unimodal teacher models, a respective student encoder of a plurality of student encoders is trained using knowledge distillation such that one or more features for each respective student encoder are forced to a same feature of the respective unimodal teacher model. A concatenation of outputs from the plurality of student encoders is used to train a fusion neural network of the multimodal neural network. Data is received from the plurality of modalities and a prediction is generated from an output layer of the trained fusion neural network.

CONSTRAINED DEVICE PLACEMENT USING NEURAL NETWORKS

Publication No.: US2024403660A1 05/12/2024
Applicant: 
GOOGLE LLC [US]
Google LLC
WO_2023059811_PA

Abstract of: US2024403660A1

Systems and methods for determining a placement for computational graph across multiple hardware devices. One of the methods includes generating a policy output using a policy neural network and using the policy output to generate a final placement that satisfies one or more constraints.

KNOWLEDGE-AUGMENTED FEATURE ADAPTER FOR VISION-LANGUAGE MODEL NEURAL NETWORKS

Publication No.: WO2024248787A1 05/12/2024
Applicant: 
GOOGLE LLC [US]
GOOGLE LLC

Abstract of: WO2024248787A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a multi-modal machine learning task on a network input that includes text and an image to generate a network output. One of the systems includes a vision-language model (VLM) neural network. The VLM neural network includes a VLM backbone neural network and an attention-based feature adapter. The VLM neural network has access to an external dataset that stores multiple text items.

Application Prototyping Systems And Methods

Publication No.: US2024403667A1 05/12/2024
Applicant: 
KINARA INC [US]
Kinara, Inc
US_2024403667_PA

Abstract of: US2024403667A1

Application prototyping systems and methods are disclosed. One aspect is a processing method for multiple computing devices that includes identifying resource constraints for the multiple computing devices. Using the identified resource constraints, a presentation model having a plurality of modifiable parameters based at least in part on the resource constraints is created. At least one inference engine supporting neural network processing is used to execute a particular neural network model based at least in part on the presentation model.

ELECTRONIC DEVICE FOR DETERMINING INFERENCE DISTRIBUTION RATIO OF ARTIFICIAL NEURAL NETWORK AND OPERATION METHOD THEREOF

Publication No.: EP4471668A1 04/12/2024
Applicant: 
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
EP_4471668_PA

Abstract of: EP4471668A1

Provided is an electronic device including a memory storing a state inference model, and at least one instruction; a transceiver; and at least one processor configured to execute the at least one instruction to: obtain, via the transceiver, first state information of each of a plurality of devices at a first time point, obtain second state information of each of the plurality of devices at a second time point that is a preset time interval after the first time point, by inputting the first state information to the state inference model, and determine an inference distribution ratio of the artificial neural network of each of the plurality of devices, based on the second state information of each of the plurality of devices, where the electronic device is determined among the plurality of devices, based on network states of the plurality of devices.

END-TO-END STREAMING KEYWORD SPOTTING

Publication No.: EP4471670A2 04/12/2024
Applicant: 
GOOGLE LLC [US]
Google LLC
EP_4471670_PA

Abstract of: EP4471670A2

A method (600) for detecting a hotword includes receiving a sequence of input frames (210) that characterize streaming audio (118) captured by a user device (102) and generating a probability score (350) indicating a presence of the hotword in the streaming audio using a memorized neural network (300). The network includes sequentially-stacked single value decomposition filter (SVDF) layers (302) and each SVDF layer includes at least one neuron (312). Each neuron includes a respective memory component (330), a first stage (320) configured to perform filtering on audio features (410) of each input frame individually and output to the memory component, and a second stage (340) configured to perform filtering on all the filtered audio features residing in the respective memory component. The method also includes determining whether the probability score satisfies a hotword detection threshold and initiating a wake-up process on the user device for processing additional terms.
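The two-stage SVDF neuron described above can be sketched as streaming code: the first stage projects each incoming frame's audio features to a scalar and pushes it into a fixed-size memory; the second stage filters across everything currently in that memory. The filter values and memory size below are illustrative assumptions, not the trained network's parameters.

```python
import numpy as np
from collections import deque

def svdf_neuron(frames, feature_filter, time_filter):
    """SVDF neuron sketch: stage one filters each frame's features
    individually and writes to a memory component; stage two filters
    all values residing in the memory at each streaming step."""
    memory = deque([0.0] * len(time_filter), maxlen=len(time_filter))
    outputs = []
    for frame in frames:                              # streaming input frames
        stage1 = float(np.dot(feature_filter, frame))  # first stage, per frame
        memory.append(stage1)                          # memory component
        stage2 = float(np.dot(time_filter, memory))    # second stage, over memory
        outputs.append(stage2)
    return outputs
```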
