Abstract of: US2025148283A1
A system for broad area geospatial object detection includes a processor configured to retrieve training data including a first plurality of orthorectified geospatial training images each including at least one labeled instance of the object of interest, and a second plurality of orthorectified geospatial images each including at least one labeled instance of the object of interest and/or at least one unlabeled instance of the object of interest, and apply at least one type of image correction to the training data. The processor is also configured to train a plurality of machine learning classifier elements, based on the first plurality of orthorectified geospatial training images and subsequently based on the second plurality of orthorectified geospatial images, each of the plurality of machine learning classifier elements being defined by a machine learning protocol parameterized based on one or more visually unique features of the object of interest.
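As an illustrative aid (not part of the abstract), the two-stage training regime can be sketched as supervised training on the fully labeled images followed by refinement on the partially labeled images, here using pseudo-labels for the unlabeled instances. The classifier, the correction step, and all names below are assumptions, not the patented method:

    # Minimal sketch, in Python, of the two-stage training flow (assumed reading).
    def apply_correction(image):
        """Stand-in for an image correction step (e.g. radiometric clamping)."""
        return [min(max(px, 0.0), 1.0) for px in image]

    class Classifier:
        """Toy classifier keyed on mean brightness, standing in for a real model."""
        def __init__(self):
            self.threshold = 0.5

        def fit(self, images, labels):
            positives = [sum(i) / len(i) for i, y in zip(images, labels) if y == 1]
            if positives:
                self.threshold = sum(positives) / len(positives)

        def predict(self, image):
            return 1 if sum(image) / len(image) >= self.threshold else 0

    # Stage 1: fully labeled images. Stage 2: labeled and/or unlabeled images,
    # where unlabeled instances get pseudo-labels from the stage-1 model.
    labeled = [([0.9, 0.8, 0.7], 1), ([0.1, 0.2, 0.1], 0)]
    mixed = [([0.85, 0.9, 0.8], None), ([0.15, 0.1, 0.2], 0)]

    clf = Classifier()
    stage1 = [(apply_correction(i), y) for i, y in labeled]
    clf.fit([i for i, _ in stage1], [y for _, y in stage1])

    stage2 = [(apply_correction(i), y) for i, y in mixed]
    stage2 = [(i, y if y is not None else clf.predict(i)) for i, y in stage2]
    clf.fit([i for i, _ in stage2], [y for _, y in stage2])
    print(clf.threshold)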
Abstract of: US2025148319A1
A method includes receiving first input values for a first parameter of a physical system, calculating first modeled values for a second parameter using a model that represents the physical system, based on the first input values, receiving measured values for the second parameter, training a machine learning model to adjust modeled values generated by the model based on a difference between the first modeled values and the measured values, receiving second input values for the first parameter, calculating second modeled values for the second parameter using the model, generating adjusted values for the second parameter by adjusting the second modeled values using the trained machine learning model, and visualizing the adjusted values for the second parameter as representing operation of the physical system.
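The hybrid loop here (a physical model plus a learned correction trained on the modeled-versus-measured residual) can be illustrated with a minimal sketch; the linear model and the mean-residual "learner" below are assumptions standing in for the patent's model and ML component:

    # Minimal sketch of residual-based correction of a physical model's output.
    def physical_model(x):
        """Stand-in for the model of the physical system."""
        return 2.0 * x

    class ResidualCorrector:
        """Toy ML model: learns the mean modeled-vs-measured residual."""
        def __init__(self):
            self.offset = 0.0

        def train(self, modeled, measured):
            residuals = [m - p for p, m in zip(modeled, measured)]
            self.offset = sum(residuals) / len(residuals)

        def adjust(self, modeled):
            return [p + self.offset for p in modeled]

    first_inputs = [1.0, 2.0, 3.0]                # first parameter values
    first_modeled = [physical_model(x) for x in first_inputs]
    measured = [2.5, 4.5, 6.5]                    # measured second parameter

    corrector = ResidualCorrector()
    corrector.train(first_modeled, measured)      # learn from the difference

    second_modeled = [physical_model(x) for x in [4.0, 5.0]]
    adjusted = corrector.adjust(second_modeled)   # values to visualize
    print(adjusted)                               # [8.5, 10.5]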
Abstract of: US2025148321A1
A computing device for predicting an expected loss for a set of claim transactions is provided. The computing device predicts, using a first machine learning model, a claim frequency of the set of claim transactions over a given time period, the first machine learning model trained using historical frequency data and based on a segment type defining a type of claim, each segment type having peril types. The computing device also predicts, using a second machine learning model, a claim severity of the set of claim transactions during the given time period, the second machine learning model trained using historical severity data and based on the segment type and the corresponding peril types. The computing device then determines the expected loss for the set of claim transactions over the given time period as the product of the predictions of the first and second machine learning models.
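The determination step is the standard actuarial frequency-severity decomposition: expected loss is the product of predicted claim frequency and predicted claim severity. A minimal sketch, with the two trained models reduced to assumed lookup tables keyed by segment and peril type:

    # Minimal sketch of expected loss = frequency prediction * severity prediction.
    frequency_model = {("auto", "collision"): 0.08}    # claims per policy, per period
    severity_model = {("auto", "collision"): 4200.0}   # average cost per claim

    def expected_loss(segment, peril, n_transactions):
        freq = frequency_model[(segment, peril)]       # first model's prediction
        sev = severity_model[(segment, peril)]         # second model's prediction
        return n_transactions * freq * sev             # product of the predictions

    print(expected_loss("auto", "collision", 1000))    # 336000.0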
Abstract of: US2025148307A1
A system and method are presented for a machine learning model configured to generate recommendations that improve the probability of a desired outcome for participants in a program. Individual profile data for past and current participants, which include participant attributes derived from sources internal and external to the program, are used to conduct a series of assessments of the participant population to determine the probability of participants achieving the desired outcome(s). Inputs to each assessment include the output(s) of the previous assessment(s) conducted in the series. Past participants with undesired and desired results are assessed to identify detrimental and beneficial impactors, teaching the system about the specific participant population(s) based on the attributes within the participant profiles. Individually tailored recommendations are automatically generated for current participants as a function of the identified impactors and are tracked to further refine future recommendations generated by the system.
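The chained structure (each assessment consuming the previous assessment's output) can be sketched as below; the assessment functions, profile fields, and recommendation rule are illustrative assumptions:

    # Minimal sketch of a series of assessments feeding one another.
    def assess_engagement(profile, prior=1.0):
        return prior * (0.9 if profile["attendance"] > 0.8 else 0.5)

    def assess_progress(profile, prior):
        return prior * (0.95 if profile["milestones_met"] else 0.6)

    profile = {"attendance": 0.9, "milestones_met": True}

    p = assess_engagement(profile)            # first assessment
    p = assess_progress(profile, p)           # consumes the prior output
    print(f"probability of desired outcome: {p:.2f}")

    # Recommendations derive from identified impactors (hypothetical rule).
    recommendations = []
    if profile["attendance"] <= 0.8:
        recommendations.append("schedule reminder sessions")
    print(recommendations or ["no intervention needed"])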
Abstract of: US2025148301A1
Methods and servers for training a decision-tree based Machine Learning Algorithm (MLA) are disclosed. During a given training iteration, the method includes generating prediction values using the current generated trees, generating estimated gradient values by applying a non-convex loss function, generating a first plurality of noisy estimated gradient values based on the estimated gradient values, generating a plurality of noisy candidate trees using the first plurality of noisy estimated gradient values, applying a selection metric to select a target tree amongst the plurality of noisy candidate trees, generating a second plurality of noisy estimated gradient values based on the plurality of estimated gradient values, generating an iteration-specific tree based on the target tree and the second plurality of noisy estimated gradient values, and storing the iteration-specific tree to be used in combination with the current generated trees.
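One iteration of this noisy boosting scheme can be sketched with decision stumps; the squared loss, noise scale, and refit rule below are simplifying assumptions (the abstract specifies a non-convex loss), not the patented algorithm:

    # Minimal sketch of one iteration: noisy gradients -> candidate trees ->
    # selected target tree -> iteration-specific tree -> stored with the ensemble.
    import random

    random.seed(0)
    xs = [0.1, 0.4, 0.6, 0.9]
    ys = [0.0, 0.0, 1.0, 1.0]
    ensemble = []                                 # current generated trees

    def predict(x):
        return sum(t(x) for t in ensemble)

    grads = [predict(x) - y for x, y in zip(xs, ys)]   # estimated gradients

    def noisy(g, scale=0.05):
        return [gi + random.gauss(0.0, scale) for gi in g]

    def fit_stump(g):
        """Fit a fixed-split stump to the negative gradients (toy learner)."""
        split = 0.5
        left = [-gi for xi, gi in zip(xs, g) if xi < split]
        right = [-gi for xi, gi in zip(xs, g) if xi >= split]
        l, r = sum(left) / len(left), sum(right) / len(right)
        return lambda x, s=split, l=l, r=r: l if x < s else r

    # First plurality of noisy gradients -> noisy candidate trees -> selection.
    candidates = [fit_stump(noisy(grads)) for _ in range(5)]
    target = min(candidates, key=lambda t: sum(
        (predict(x) + t(x) - y) ** 2 for x, y in zip(xs, ys)))

    # Second plurality of noisy gradients refines the leaf values; the stump's
    # split is fixed here, so the structure matches the selected target tree.
    iteration_tree = fit_stump(noisy(grads))
    ensemble.append(iteration_tree)               # stored with current trees
    print([round(predict(x), 2) for x in xs])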
Abstract of: US2025148305A1
Embodiments receive ride data, user data comprising user information and other user-related data, and crowd-sourced historical data; train a machine learning model using a knowledge corpus that includes the ride data, the user data, and the crowd-sourced historical data; and dynamically adjust at least one ride recommendation based on the trained machine learning model.
Abstract of: US2025148213A1
A natural language understanding (NLU) framework includes a concept system that performs concept matching of user utterances. The concept system generates a concept cluster model from sample utterances of an intent-entity model, and then trains a machine learning (ML) concept model based on the concept cluster model. Once trained, the concept model receives semantic vectors representing potential concepts extracted from utterances and provides concept indicators to an ensemble scoring system. These concept indicators include indications of which concepts of the concept model matched the potential concepts, which intents of the intent-entity model are related to these concepts, and concept-relationship scores indicating the strength and/or uniqueness of the relationship between each concept-intent combination. Based on these concept indicators, the ensemble scoring system may determine and apply an ensemble scoring adjustment when determining an ensemble artifact score for each of the artifacts extracted from an utterance.
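The scoring adjustment can be sketched as a score boost driven by concept-relationship scores; the concept model, intents, and blend weight below are assumptions:

    # Minimal sketch of a concept-driven ensemble scoring adjustment.
    # concept -> {intent: concept-relationship score (strength/uniqueness)}
    concept_model = {
        "billing": {"pay_invoice": 0.9, "dispute_charge": 0.4},
        "refund": {"dispute_charge": 0.8},
    }

    def ensemble_score(intent, base_score, matched_concepts):
        """Adjust an artifact's base score by its concept-relationship scores."""
        adjustment = sum(concept_model[c].get(intent, 0.0)
                         for c in matched_concepts)
        return base_score + 0.1 * adjustment      # 0.1 = assumed blend weight

    matched = ["billing", "refund"]               # concepts matched in the utterance
    for intent, base in [("pay_invoice", 0.62), ("dispute_charge", 0.60)]:
        print(intent, round(ensemble_score(intent, base, matched), 3))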
Abstract of: US2025147851A1
Systems and techniques for multi-phase cloud service node error prediction are described herein. A set of spatial metrics and a set of temporal metrics may be obtained for node devices in a cloud computing platform. The node devices may be evaluated using a spatial machine learning model and a temporal machine learning model to create a spatial output and a temporal output. One or more potentially faulty nodes, a subset of the node devices, may be determined based on an evaluation of the spatial output and the temporal output using a ranking model. One or more migration source nodes may then be identified from the one or more potentially faulty nodes by minimizing a cost of false-positive and false-negative node detection.
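A sketch of the ranking-and-selection phase follows; the weighted ranking model, the cost constants, and the node scores are assumptions, not values from the patent:

    # Minimal sketch: combine spatial/temporal outputs, rank, pick migration
    # sources at the cutoff minimizing an expected false-positive/negative cost.
    nodes = {
        "node-a": {"spatial": 0.7, "temporal": 0.9},
        "node-b": {"spatial": 0.1, "temporal": 0.1},
        "node-c": {"spatial": 0.8, "temporal": 0.6},
    }

    def rank(scores, w_spatial=0.5, w_temporal=0.5):
        """Toy ranking model over the two models' outputs."""
        return w_spatial * scores["spatial"] + w_temporal * scores["temporal"]

    C_FP, C_FN = 1.0, 5.0    # assumed costs of false positives/negatives

    def expected_cost(cutoff):
        cost = 0.0
        for n in nodes:
            p_faulty = rank(nodes[n])
            if p_faulty >= cutoff:
                cost += (1 - p_faulty) * C_FP    # possible needless migration
            else:
                cost += p_faulty * C_FN          # possible missed faulty node
        return cost

    best_cutoff = min((c / 10 for c in range(1, 10)), key=expected_cost)
    migration_sources = [n for n in nodes if rank(nodes[n]) >= best_cutoff]
    print(best_cutoff, migration_sources)        # 0.2 ['node-a', 'node-c']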
Abstract of: US2025147753A1
In accordance with example embodiments of the invention, there are at least a method and apparatus to execute a machine learning inference loop of a currently deployed or stored at least one machine learning model, wherein the currently deployed or stored at least one machine learning model is identified based on a manifest file received from a communication network; based on determined factors, request from the communication network a model update for use with the currently deployed or stored at least one machine learning model; based on the request, receive information from the communication network comprising the model update; and, based on the information, perform a model update to update the currently deployed or stored at least one machine learning model. On the network side, the method further comprises receiving, based on determined factors, from a user equipment a communication to trigger a machine learning model update for use with a currently deployed or stored at least one machine learning model at the user equipment; based on the communication, determining information comprising the model update; and, based on the determining, sending towards the client the information comprising the model update to update the currently deployed or stored at least one machine learning model.
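The device-side flow can be sketched as below; the manifest fields, the update payload, and the "determined factors" are illustrative assumptions:

    # Minimal sketch of manifest-driven model identification and update.
    current_model = {"name": "detector", "version": 3}

    def network_get_manifest():
        return {"name": "detector", "latest_version": 5}   # from the network

    def network_get_update(name, from_version):
        return {"name": name, "version": 5, "weights": "..."}

    def should_request_update(manifest, model, accuracy_drop=0.12):
        # "Determined factors" here: version lag or an observed accuracy drop.
        return manifest["latest_version"] > model["version"] or accuracy_drop > 0.1

    manifest = network_get_manifest()             # identifies the deployed model
    if should_request_update(manifest, current_model):
        update = network_get_update(current_model["name"], current_model["version"])
        current_model.update(version=update["version"], weights=update["weights"])
    print(current_model["version"])               # 5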
Abstract of: US2025150457A1
A system, method, and tangible non-transitory storage medium are disclosed. The system includes an integration platform configured to generate an interactive graphical user interface (GUI) that simultaneously displays and provides access to a combination of services, internal resources, and external resources. Responsive to receiving input from a user device, the interactive GUI provides access to one or more selected services, internal resources, and/or external resources. The integration platform may also monitor and capture interaction data associated with activity between the user device and the integration platform, execute one or more machine learning models to predict user-specific interaction tendencies, and revise one or more aspects of the interactive GUI based on the predicted user-specific interaction tendencies.
Abstract of: WO2025093915A1
A machine learning platform operating at a server is described. The machine learning platform accesses a dataset from a datastore. A task that identifies a target of a machine learning algorithm from the machine learning platform is defined. The machine learning algorithm forms a machine learning model based on the dataset and the task. The machine learning platform deploys the machine learning model and monitors a performance of the machine learning model after deployment. The machine learning platform updates the machine learning model based on the monitoring.
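The lifecycle can be sketched end to end; the dataset, the toy learner, and the accuracy threshold for retraining are assumptions:

    # Minimal sketch: access data, train, deploy, monitor, update on degradation.
    def access_dataset():
        return [(x / 10, 1 if x > 5 else 0) for x in range(10)]  # (feature, label)

    def train(dataset):
        cut = sum(x for x, y in dataset if y == 1) / sum(y for _, y in dataset)
        return lambda x: 1 if x >= cut else 0                    # trained model

    def monitor(model, live_data):
        return sum(model(x) == y for x, y in live_data) / len(live_data)

    deployed = train(access_dataset())            # form and deploy the model
    accuracy = monitor(deployed, [(0.2, 0), (0.9, 1)])
    if accuracy < 0.9:                            # update based on monitoring
        deployed = train(access_dataset())
    print(accuracy)                               # 1.0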
Abstract of: WO2025093940A1
A computer-implemented method, system, and computer program product for maximizing bandwidth utilization of PCIe links. The bandwidth utilization of a PCIe link involving a PCIe card is measured. The bandwidth utilization of the PCIe link at a future time is predicted from the measured bandwidth utilization using a machine learning model trained to predict bandwidth utilization of PCIe links. If the predicted bandwidth utilization of the PCIe link exceeds a threshold value, the PCIe card is configured to implement a first mode of operation that utilizes more bandwidth, if it is not already implementing the first mode of operation at the future time. If the predicted bandwidth utilization of the PCIe link does not exceed the threshold value, the PCIe card is configured to implement a second mode of operation that utilizes less bandwidth, if it is not already implementing the second mode of operation at the future time.
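The threshold logic lends itself to a short sketch; the trend-extrapolating predictor stands in for the trained ML model, and the threshold and mode names are assumptions:

    # Minimal sketch of predicted-utilization-driven PCIe mode switching.
    THRESHOLD = 0.75                              # fraction of link capacity

    def predict_utilization(history):
        """Stand-in for the ML model: naive trend extrapolation."""
        return min(1.0, history[-1] + (history[-1] - history[0]) / len(history))

    class PCIeCard:
        def __init__(self):
            self.mode = "low_bandwidth"

        def set_mode(self, mode):
            if self.mode != mode:                 # skip if already in that mode
                self.mode = mode

    card = PCIeCard()
    measured = [0.50, 0.60, 0.70]                 # measured link utilization
    predicted = predict_utilization(measured)     # utilization at the future time

    if predicted > THRESHOLD:
        card.set_mode("high_bandwidth")           # first mode: more bandwidth
    else:
        card.set_mode("low_bandwidth")            # second mode: less bandwidth
    print(round(predicted, 2), card.mode)         # 0.77 high_bandwidth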
Abstract of: WO2025096084A1
A computing system for detecting patterns in data transmitted over a network is provided. The computing system includes a model engine configured to receive an initial dataset including historical data for a first time period, and segment the initial dataset into a plurality of subsets, each subset associated with a second time period smaller than the first time period. The model engine is further configured to train a machine learning model on each subset separately, receive a candidate dataset, analyze the candidate dataset using the trained machine learning model, and assign a score to the candidate dataset based on the analysis. The computing system further includes a rules engine configured to receive the candidate dataset and the corresponding score from the model engine, and generate and output, based at least in part on the score, a decision regarding the candidate dataset.
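The per-period training and scoring flow can be sketched as below; the statistics used as "models", the distance score, and the rules threshold are assumptions:

    # Minimal sketch: per-subset models, a candidate score, a rules decision.
    historical = {                    # second-period subsets of the first period
        "week1": [10.0, 11.0, 9.0],
        "week2": [50.0, 52.0, 48.0],
    }

    models = {}                       # one toy "model" per subset: (mean, spread)
    for period, values in historical.items():
        mean = sum(values) / len(values)
        models[period] = (mean, max(values) - min(values))

    def score(candidate):
        """Distance from the nearest per-period model, in units of spread."""
        return min(abs(candidate - m) / s for m, s in models.values())

    def rules_engine(s, threshold=3.0):
        return "flag" if s > threshold else "allow"

    s = score(30.0)                   # analyze a candidate dataset value
    print(round(s, 2), rules_engine(s))   # 5.0 flag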
Abstract of: WO2025097173A2
A system and method for data compression with protocol adaptation that utilize a codebook generator leveraging one or more machine/deep learning algorithms, trained on at least a plurality of protocol policies, to generate a protocol appendix and codebook. Original data is encoded by an encoder according to the codebook and sent to a decoder; however, instead of simply decoding the data according to the codebook to reconstruct the original data, data manipulation rules such as mapping and transformation are applied at the decoding stage to transform the decoded data into protocol-formatted data.
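The decode-side adaptation can be sketched as below; the codebook, protocol appendix, and transformation rules are illustrative assumptions:

    # Minimal sketch: codebook encoding, then protocol transformation at decode.
    codebook = {"temperature": "T", "humidity": "H"}   # token -> codeword
    inverse = {v: k for k, v in codebook.items()}

    # Protocol appendix: rules mapping decoded fields into protocol format.
    appendix = {"temperature": lambda v: f"TEMP={v:.1f}C",
                "humidity": lambda v: f"HUM={v:.0f}%"}

    def encode(records):
        return [(codebook[name], value) for name, value in records]

    def decode_with_adaptation(encoded):
        out = []
        for codeword, value in encoded:
            name = inverse[codeword]              # ordinary codebook decode
            out.append(appendix[name](value))     # then the mapping rule
        return out

    packets = encode([("temperature", 21.5), ("humidity", 40.0)])
    print(decode_with_adaptation(packets))        # ['TEMP=21.5C', 'HUM=40%']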
Publication No.: WO2025096822A1 08/05/2025
Applicant:
NARRATIZE INC [US]
NARRATIZE, INC
Abstract of: WO2025096822A1
Certain aspects of the present disclosure provide techniques for narrative creation. A method generally includes receiving a selection of a first narrative type for generation; obtaining a plurality of user responses to a plurality of prompts associated with the first narrative type, together with at least one of one or more stories from one or more users stored in a repository or one or more insights associated with one or more documents stored in the repository; and processing, by one or more machine learning (ML) models, the plurality of user responses and at least one of the one or more stories or the one or more insights to generate an output associated with the first narrative type.