Abstract of: WO2025134467A1
A multi-core processor having a plurality of cores each including a plurality of processor elements, wherein: the multi-core processor executes: at least one of Classification processing, Detection processing, Segmentation processing, and Pose Estimation processing, as Deep Neural Network (DNN) processing using a DNN, on vehicle-exterior image data input to a control device; at least one of water droplet/dirt removal processing, distortion correction processing, resizing processing, cutout processing, and luminance standardization processing, as pre-processing for the DNN processing; and at least one of NMS processing and top K processing as post-processing for the DNN processing.
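The NMS and top-K post-processing steps named in this abstract can be sketched in plain Python. This is a minimal greedy NMS for illustration only; the box format (x1, y1, x2, y2) and the 0.5 IoU threshold are assumptions, not details from the publication:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order and keep a box only if it does not overlap an already kept
    box by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Top-K post-processing is simpler still: retain the K detections with the highest scores, e.g. `sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]`.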
Abstract of: US2025209320A1
A hardware system is provided which includes a neural network. The neural network comprises nodes interconnected by synapses implemented by respective hardware devices. The hardware devices are configured to generate an output by performing an inference operation using the neural network. The operation of the synapses is controlled by setting a physical property of the respective hardware devices implementing the respective synapses, at least one of setting or reading the physical property being subject to noise. The neural network associates probabilistic weight distributions with respective synapses. Setting the physical property of a given synapse comprises applying a weight value sampled from the weight distribution corresponding to that synapse. Performing the inference operation comprises performing multiple inference determinations using multiple respective sampled weight values for the synapses to obtain multiple inference results. The multiple inference results indicate a confidence interval for the output of the inference operation. The use of multiple inference determinations acts further to suppress the effect of noise for at least one of setting or reading the physical property of the synapses. Such a hardware system may also be used for generating and training the neural network.
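The multiple-inference scheme described above can be sketched in software: weights are drawn afresh from per-synapse distributions for each inference determination, and the spread of the outputs yields a confidence interval. The single linear "network", the Gaussian weight distributions, and the 95% interval below are illustrative assumptions, not details from the publication:

```python
import random
import statistics

def sampled_inference(x, weight_dists, n_samples=1000, seed=0):
    """Run many inference determinations, each with synapse weights
    freshly sampled from per-synapse Gaussian (mu, sigma) distributions,
    and report the mean output with an empirical 95% interval."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # One inference determination with freshly sampled weights.
        ws = [rng.gauss(mu, sigma) for (mu, sigma) in weight_dists]
        outputs.append(sum(w * xi for w, xi in zip(ws, x)))
    outputs.sort()
    lo = outputs[int(0.025 * n_samples)]
    hi = outputs[int(0.975 * n_samples)]
    return statistics.mean(outputs), (lo, hi)
```

Averaging over many sampled determinations also suppresses zero-mean noise in setting or reading the physical weights, which is the effect the abstract highlights.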
Abstract of: US2025209315A1
A method of operating an artificial neural network model including a plurality of nodes includes: dividing the artificial neural network model into a divided artificial neural network including a plurality of node groups using a first grouping manner, allocating the plurality of node groups to a plurality of first hardware accelerators and a plurality of second hardware accelerators using a first corresponding manner to generate an allocation, executing the divided artificial neural network model on a plurality of input values to generate a plurality of inference result values, for each of the plurality of inference result values, recording activation area information of the plurality of node groups and a call count, and performing at least one of a first operation to change the allocation and a second operation to change the divided artificial neural network based on the activation area information and the call count.
Abstract of: AU2023346892A1
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for enabling a user to conduct a dialogue. Implementations of the system learn when to rely on supporting evidence, obtained from an external search system via a search system interface, and are also able to generate replies for the user that align with the preferences of a previously trained response selection neural network. Implementations of the system can also use a previously trained rule violation detection neural network to generate replies that take account of previously learnt rules.
Abstract of: WO2024038114A1
Methods, systems, and computer readable storage media for performing operations comprising: obtaining a plurality of initial network inputs that have been classified as belonging to a corresponding ground truth class; processing each of the plurality of initial network inputs using a trained target neural network to generate a respective predicted network output for each initial network input, the respective predicted network output comprising a respective score for each of a plurality of classes, the plurality of classes comprising the ground truth class; identifying, based on the respective predicted network outputs and the ground truth class, a subset of the initial network inputs as having been misclassified by the trained target neural network; and determining, based on the subset of initial network inputs, one or more failure case latent representations, wherein each failure case latent representation is a latent representation that characterizes network inputs that belong to the ground truth class but that are likely to be misclassified by the trained target neural network.
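The first step of the pipeline above, identifying the misclassified subset, reduces to comparing the argmax of each predicted per-class score vector against the ground-truth class. A minimal sketch (function name and score layout are illustrative assumptions):

```python
def find_failures(outputs, ground_truth_class):
    """Return indices of network inputs that were misclassified: the
    predicted class (argmax of the per-class scores) differs from the
    ground-truth class shared by all of the inputs."""
    return [i for i, scores in enumerate(outputs)
            if max(range(len(scores)), key=scores.__getitem__)
            != ground_truth_class]
```

The failure-case latent representations would then be derived from the latent encodings of exactly this subset.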
Abstract of: EP4575780A1
A method of generating schedules for an adaptive embedded system, the method comprising: deriving task sets of all possible tasks to be performed by the embedded system; deriving sets of all possible hardware configurations of the embedded system; creating a multi-model system having a multi-model defining the adaptivity of the system for all possible tasks and all possible hardware and all combinations thereof, the adaptivity defining how the system can change operation responsive to a mode change requirement and/or occurrence of a fault; solving a scheduling problem for the models of the multi-model system in a neuromorphic accelerator implemented by spiked neural networks; and providing schedule instructions to the system, for performance of tasks, based on the solution.
Abstract of: GB2636668A
Deriving insights from time series data can include receiving subject matter expert (SME) input characterizing one or more aspects of a time series. A model template that specifies one or more components of the time series can be generated by translating the SME input using a rule-based translator. A machine learning model based on the model template can be a multilayer neural network having one or more component definition layers, each configured to extract one of the one or more components from time series data input corresponding to an instantiation of the time series. With respect to a decision generated by the machine learning model based on the time series data input, a component-wise contribution of each of the one or more components to the decision can be determined. An output can be generated, the output including the component-wise contribution of at least one of the one or more components.
Abstract of: US2025200738A1
Various embodiments of the teachings herein include a method for accelerating deep learning inference of a neural network with layers. An example includes: generating a line-wise image consisting of pixels by a line-camera scanning an object; and, for each newly generated pixel-line, reusing results of previous calculations for computations in the current layer that do not involve the new pixel-line, instead of recomputing the value of a pixel in the next layer.
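The reuse idea above can be sketched with a toy per-row "layer" whose output rows each depend on a fixed window of input rows: outputs already computed for earlier scan lines are cached, and only rows touched by the newly arrived line are computed. The window-sum layer and cache layout are illustrative assumptions, not details from the publication:

```python
def incremental_layer(cache, rows, kernel_rows=3):
    """Toy layer: output row i is the column-wise sum of kernel_rows
    consecutive input rows. Rows already in the cache are reused; only
    output rows that involve newly arrived input rows are computed."""
    n_out = len(rows) - kernel_rows + 1
    for i in range(n_out):
        if i not in cache:  # only rows touched by new input trigger work
            window = rows[i:i + kernel_rows]
            cache[i] = [sum(col) for col in zip(*window)]
    return [cache[i] for i in range(n_out)]
```

When a new pixel-line arrives, a call with the extended row list computes only the newly reachable output rows; all earlier rows come straight from the cache.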
Abstract of: US2025200696A1
One embodiment provides for a method of transmitting data between multiple compute nodes of a distributed compute system, the method comprising multi-dimensionally partitioning data of a feature map across multiple nodes for distributed training of a convolutional neural network; performing a parallel convolution operation on the multiple partitions to train weight data of the neural network; and exchanging data between nodes to enable computation of halo regions, the halo regions having dependencies on data processed by a different node.
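The halo-region exchange described above can be sketched for the simple case of a feature map partitioned into horizontal slabs across nodes: before a convolution, each slab receives boundary rows from its neighbours so it can compute its edge outputs without further communication. The one-dimensional partitioning and halo width are illustrative assumptions:

```python
def exchange_halos(partitions, halo=1):
    """Each entry of `partitions` is one node's horizontal slab of the
    feature map (a list of rows). Return each slab padded with `halo`
    boundary rows copied from its neighbouring slabs, simulating the
    inter-node data exchange for halo regions."""
    padded = []
    for i, part in enumerate(partitions):
        top = partitions[i - 1][-halo:] if i > 0 else []
        bottom = partitions[i + 1][:halo] if i < len(partitions) - 1 else []
        padded.append(top + part + bottom)
    return padded
```

In a real distributed run the copies would be point-to-point messages (e.g. MPI send/recv) rather than list slices, but the data dependency is the same.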
Abstract of: US2025200360A1
A method and apparatus with organic molecule spectrum prediction are disclosed. The method includes accessing a molecular structure representation of an organic molecule; generating parameters of an approximated Franck-Condon progression by inputting the molecular structure representation to a neural network model that infers the parameters from the molecular structure representation; and generating a spectrum of the organic molecule based on the generated parameters.
Abstract of: AU2024266941A1
A computer-implemented method comprising: accessing data related to at least one attribute of at least one item over time; pre-processing the data by encoding the data to provide labelled data; obtaining a set of attribute predictions by applying the labelled data to a combination prediction model, wherein the combination prediction model comprises two or more supervised learning workflows; and determining and displaying a recommended subset of attribute predictions in response to a user selection, wherein the two or more supervised learning workflows comprise: an integrated neural network, and a random forest model.
Abstract of: KR20250086342A
A neural network model-based method for jamo-unit speech recognition includes: a speech recognition device receiving a user's speech data; the speech recognition device extracting a sequence composed of jamo units from the speech data; and the speech recognition device inputting the sequence to a trained text merging model to generate text in which the jamo are merged.
Abstract of: US2025190797A1
A computer system comprising a processor and a memory storing instructions that, when executed by the processor, cause the computer system to perform a set of operations. The set of operations comprises collecting domain attribute data comprising one or more domain attribute features for a domain, collecting sampled domain profile data comprising one or more domain profile features for the domain and generating, using the domain attribute data and the sampled domain profile data, a domain reputation assignment utilizing a neural network.
Abstract of: US2025190796A1
Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Abstract of: US2025191355A1
Systems and methods for image modification to increase contrast between text and non-text pixels within the image. In one embodiment, an original document image is scaled to a predetermined size for processing by a convolutional neural network. The convolutional neural network identifies a probability that each pixel in the scaled image is text and generates a heat map of these probabilities. The heat map is then scaled back to the size of the original document image, and the probabilities in the heat map are used to adjust the intensities of the text and non-text pixels. For positive text, intensities of text pixels are reduced and intensities of non-text pixels are increased in order to increase the contrast of the text against the background of the image. Optical character recognition may then be performed on the contrast-adjusted image.
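The positive-text adjustment described above can be sketched per pixel: where the heat map says "text", intensity goes down; elsewhere it goes up, clamped to the 8-bit range. The 0.5 threshold and fixed adjustment step are illustrative assumptions, not details from the publication:

```python
def adjust_contrast(pixels, heat_map, threshold=0.5, delta=40):
    """For positive (dark-on-light) text: lower the intensity of pixels
    the heat map marks as probable text, raise the intensity of the
    rest, clamping results to the 0-255 range."""
    out = []
    for p, prob in zip(pixels, heat_map):
        if prob >= threshold:
            out.append(max(0, p - delta))    # text pixel: make darker
        else:
            out.append(min(255, p + delta))  # background: make lighter
    return out
```

A production version would likely scale the adjustment by the probability itself rather than thresholding, but the contrast-stretching effect is the same.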
Abstract of: US2025191695A1
We propose a neural network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions on the image patches on a sliding convolution window basis such that, in a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. The method further includes beginning with output features produced by the 3D convolutions as starting input, applying further convolutions and producing final output features and processing the final output features through an output layer and producing base calls for one or more of the associated analytes to be base called at each of the sequencing cycles.
Abstract of: US2025181897A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output sequences using a non-auto-regressive neural network.
Abstract of: US2025181925A1
A computer-implemented method comprising: accessing data related to at least one attribute of at least one item over time; pre-processing the data by encoding the data to provide labelled data; obtaining a set of attribute predictions by applying the labelled data to a combination prediction model, wherein the combination prediction model comprises two or more supervised learning workflows; and determining and displaying a recommended subset of attribute predictions in response to a user selection, wherein the two or more supervised learning workflows comprise: an integrated neural network, and a random forest model.
Abstract of: WO2025109032A2
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating audio and, optionally, a corresponding image using generative neural networks. For example, a spectrogram of the audio can be generated using a hierarchy of diffusion neural networks.
Abstract of: US2025173567A1
One embodiment provides for a computer-readable medium storing instructions that cause one or more processors to perform operations comprising determining a per-layer scale factor to apply to tensor data associated with layers of a neural network model and converting the tensor data to converted tensor data. The tensor data may be converted from a floating point datatype to a second datatype that is an 8-bit datatype. The instructions further cause the one or more processors to generate an output tensor based on the converted tensor data and the per-layer scale factor.
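The per-layer scale factor conversion described above amounts to symmetric per-tensor quantization: one scale factor maps the tensor's float range onto signed 8-bit integers, and the same factor reconstructs approximate floats on output. A minimal sketch (the symmetric scheme and rounding choice are assumptions for illustration):

```python
def quantize_layer(tensor, qmax=127):
    """Symmetric per-layer quantization: choose one scale factor for
    the whole tensor, convert floats to signed 8-bit integers in
    [-128, 127], and return the scale needed for reconstruction."""
    scale = max(abs(v) for v in tensor) / qmax or 1.0  # avoid scale 0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in tensor]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float values from the 8-bit tensor."""
    return [v * scale for v in q]
```

The output tensor in the abstract would be computed from the converted 8-bit data, with the per-layer scale factor applied to bring results back to the float domain.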
Abstract of: US2025172933A1
A graph has nodes which represent entities of the industrial system and edges representing relations between the entities of the industrial system. A graph neural network processing the graph calculates a class prediction for at least one entity of the industrial system. A sub-symbolic explainer processes the class prediction to identify edges between nodes and associated features of nodes belonging to a sub-graph within the graph having influenced the class prediction. A large language model, equipped with a plugin for accessing the graph and receiving a prompt including the sub-graph, transforms the sub-graph into a maintenance justification in natural language. A user interface outputs a predictive maintenance alert along with the maintenance justification.
Abstract of: US2025174225A1
A method for detection of a keyword in a continuous stream of audio signal, by using a dilated convolutional neural network, implemented by one or more computers embedded on a device, the dilated convolutional network comprising a plurality of dilation layers, including an input layer and an output layer, each layer of the plurality of dilation layers comprising gated activation units, and skip-connections to the output layer, the dilated convolutional network being configured to generate an output detection signal when a predetermined keyword is present in the continuous stream of audio signal, the generation of the output detection signal being based on a sequence of successive measurements provided to the input layer, each successive measurement of the sequence being measured on a corresponding frame from a sequence of successive frames extracted from the continuous stream of audio signal, at a plurality of successive time steps.
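The dilation layers at the heart of the network above can be sketched as a causal dilated 1D convolution: the output at time t combines inputs at t, t-d, t-2d, ..., so stacking layers with growing dilation d gives an exponentially growing receptive field over the audio stream. This toy version omits the gated activation units and skip-connections the abstract describes:

```python
def dilated_conv1d(signal, kernel, dilation):
    """Causal dilated 1D convolution: output[t] combines signal values
    at t, t - dilation, t - 2*dilation, ...; samples before the start
    of the signal are treated as zero (causal zero padding)."""
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = t - j * dilation
            if idx >= 0:
                acc += w * signal[idx]
        out.append(acc)
    return out
```

With kernel size 2 and dilations 1, 2, 4, ..., an L-layer stack sees roughly 2^L successive measurements, which is why such networks can watch a long audio context at low cost.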
Publication No.: US2025173563A1 29/05/2025
Applicant: IQVIA INC [US]
Abstract of: US2025173563A1
A deep learning model implements continuous, lifelong machine learning (LML) based on a Bayesian neural network using a framework including wide, deep, and prior components that use available real-world healthcare data differently to improve prediction performance. The outputs from each component of the framework are combined to produce a final output that may be utilized as a prior structure when the deep learning model is refreshed with new data in a deep learning process. Lifelong learning is implemented by dynamically integrating present learning from the wide and deep learning components with past learning from models in the prior component into future predictions. The Bayesian deep neural network-based LML model increases accuracy in identifying patient profiles by continuously learning, as new data becomes available, without forgetting prior knowledge.