Abstract of: EP4610891A1
A fundamental task in machine learning (ML) is automating the setting of hyperparameters to optimize model performance. Traditionally, the hyperparameter optimization problem has been solved with brute-force techniques such as grid search, which exponentially increase computation cost and memory overhead. Given the complexity and variety of ML models, selecting the right combination of hyperparameters to maximize model performance remains practically difficult. Embodiments of the present disclosure provide systems and methods for hyperparameter optimization in machine learning models that effectively reduce the hyperparameter search dimensions, identify the important, highly variable hyperparameter dimensions in order to find the best hyperparameters, and eliminate categorical dimensions using a combination of reduction-iteration techniques, thereby saving the computing energy of the machine learning process.
Abstract of: US2025272582A1
A system and method for feedback-driven automated drug discovery that combines machine learning algorithms with automated research facilities and equipment to make the process of drug discovery more data-driven and less reliant on intuitive decision-making by experts. In an embodiment, the system comprises automated research equipment configured to perform automated assays of chemical compounds, a data platform comprising drug databases and an analysis engine, bioactivity and de novo modules operating on the data platform, and a retrosynthesis system operating on the drug discovery platform, all configured in a feedback loop that drives drug discovery by using the outcomes of assays performed on the automated research equipment to feed the bioactivity module and retrosynthesis system, which identify new molecules for testing by the automated research equipment.
Abstract of: US2025272394A1
An information management system includes one or more client computing devices in communication with a storage manager and a secondary storage computing device. The storage manager manages the primary data of the one or more client computing devices and the secondary storage computing device manages secondary copies of the primary data of the one or more client computing devices. Each client computing device may be configured with a ransomware protection monitoring application that monitors for changes in their primary data. The ransomware protection monitoring application may input the changes detected in the primary data into a machine-learning classifier, where the classifier generates an output indicative of whether a client computing device has been affected by malware and/or ransomware. Using a virtual machine host, a virtual machine copy of an affected client computing device may be instantiated using a secondary copy of primary data of the affected client computing device.
Abstract of: US2025272608A1
Systems, methods, apparatuses, and computer program products are disclosed for generating and using a hybrid artificial intelligence classifier for classifying input into one or more nodes of a taxonomy. Training data is received for at least a first portion of the taxonomy and used to train a supervised machine learning (ML) model to classify input into the first portion of the taxonomy having training data. A large language model (LLM) taxonomy is determined for at least a second portion of the taxonomy. The hybrid AI classifier classifies input based on a first classification obtained by providing the input to the supervised ML model, and a second classification obtained by providing at least the input and the LLM taxonomy to a pre-trained LLM.
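As an illustration, the combination of the two classifications might be sketched as follows (a minimal sketch; the routing rule, confidence threshold, and stub models are hypothetical assumptions, not taken from the patent):

```python
def hybrid_classify(text, supervised_model, llm_classify, trained_nodes,
                    threshold=0.7):
    """Combine the two classifications: trust the supervised ML model when
    it places the input in a portion of the taxonomy it was trained on with
    enough confidence; otherwise defer to the LLM-based classification.
    (The routing rule and threshold are illustrative assumptions.)"""
    node, confidence = supervised_model(text)
    if node in trained_nodes and confidence >= threshold:
        return node
    return llm_classify(text)

# Stubs standing in for the trained ML model and the pre-trained LLM.
supervised = lambda t: ("billing", 0.9) if "invoice" in t else ("billing", 0.2)
llm = lambda t: "shipping"

first = hybrid_classify("invoice overdue", supervised, llm, {"billing"})
second = hybrid_classify("where is my parcel", supervised, llm, {"billing"})
```

Here the first input is confidently handled by the supervised model, while the second falls through to the LLM-based classification.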
Abstract of: US2025269112A1
Methods and systems to validate physiologic waveform reliability, and uses thereof, are provided. A number of embodiments describe methods to validate waveform reliability, including blood pressure waveforms, electrocardiogram waveforms, and/or any other physiological measurement producing a continuous waveform. Certain embodiments output reliability measurements to closed-loop systems that can control infusion rates of cardioactive drugs or other fluids in order to regulate blood pressure, cardiac rate, cardiac contractility, and/or vasomotor tone. Further embodiments allow waveform evaluators to validate waveform reliability based on at least one waveform feature, using data collected from clinical monitors and machine learning algorithms.
Abstract of: US2025272617A1
Some aspects of the present disclosure relate to systems, methods and computer readable media for outputting alerts based on potential violations of predetermined standards of behavior. In one example implementation, a computer implemented method includes: training a natural language-based machine learning model to detect at least one risk of a violation condition in an electronic communication between persons, wherein the violation condition is a potential violation of a first predetermined standard of behavior; receiving a lexicon, wherein the lexicon comprises topic data; receiving connection data representing a relationship between the trained machine learning model and the lexicon; detecting, using the trained machine learning model, the lexicon, and the connection data, a potential violation of a second predetermined standard of behavior; and outputting for display an alert indicating the potential violation of the second predetermined standard of behavior.
Abstract of: AU2023383086A1
Embodiments introduce an approach to semi-automatically generate labels for data based on implementation of a clustering or language model prompting technique and can be used to implement a form of programmatic labeling to accelerate the development of classifiers and other forms of models. The disclosed methodology is particularly helpful in generating labels or annotations for unstructured data. In some embodiments, the disclosed approach may be used with data in the form of text, images, or other form of unstructured data.
Abstract of: WO2025175313A1
In various embodiments, a computing system is configured to provide a multi-stage cascade of large language models and stage N neural networks that identifies matching data records within a set of data records and then merges the matching data records. More specifically, the computing system can use a combination of domain-agnostic large language models and downstream neural network classifiers to identify matching data records that would otherwise not be possible with other machine learning or rules-based entity resolution systems. In one example, a computing system receives an entity resolution request. The entity resolution request can indicate a first entity and a second entity. For example, a data steward may provide the entity resolution request to help determine whether the entities are the same or different.
Abstract of: US2025265479A1
Various embodiments of the teachings herein include a method for creating a knowledge graph in the industrial field. An example includes: obtaining unstructured data from a first source in a sub-field of the industrial field, with knowledge annotations; performing machine learning on the unstructured data to generate a first model adapted to extract knowledge; extracting knowledge from second unstructured data provided by the first source based on the first model, without knowledge annotations; obtaining first structured data and first semi-structured data from a second source in a second sub-field; extracting second knowledge from the first structured data; extracting third knowledge from the first semi-structured data; and building a knowledge graph integrating the first and second sub-field based on the first, second, and third knowledge, represented in the form of triples.
Abstract of: US2025265478A1
An off-policy evaluation system performs episodic off-policy evaluations to perform off-policy evaluation (OPE) for multiple, joint episodes. For a single episode, a first machine learning model outputs a propensity for each action for the user and selects a first action for the user from the set of propensities. For a second episode, a second machine learning model outputs a propensity for each action for the user and selects a first action for the user from the set of propensities. The second machine learning model is evaluated by determining an importance weight for the first model and the second model to determine the inverse propensity score of the second machine learning model.
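The inverse-propensity evaluation described in this abstract can be illustrated with a minimal sketch (the function name, toy log format, and policies are hypothetical, not from the patent):

```python
def ips_estimate(logged, target_propensity):
    """Estimate the value of the policy under evaluation from interactions
    logged under the behavior policy, via inverse propensity scoring.

    logged: list of (action, behavior_propensity, reward) tuples.
    target_propensity: probability of the action under the evaluated policy.
    """
    total = 0.0
    for action, behavior_p, reward in logged:
        weight = target_propensity(action) / behavior_p  # importance weight
        total += weight * reward
    return total / len(logged)

# Toy log: the behavior policy picked each of two actions uniformly.
logs = [("a", 0.5, 1.0), ("b", 0.5, 0.0), ("a", 0.5, 1.0), ("b", 0.5, 0.0)]
# Policy under evaluation: always pick action "a".
value = ips_estimate(logs, lambda a: 1.0 if a == "a" else 0.0)
```

Each logged reward is reweighted by the ratio of the evaluated policy's propensity to the behavior policy's propensity, so actions the new policy favors count more heavily.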
Abstract of: US2025265546A1
The present disclosure provides systems and methods that may advantageously apply machine learning to accurately manage and predict inventory variables with future uncertainty. In an aspect, the present disclosure provides a system that can receive an inventory dataset comprising a plurality of inventory variables that indicate at least historical (i) inventory levels, (ii) inventory holding costs, (iii) supplier orders, and/or (iv) lead times over time. The plurality of inventory variables can be characterized by having one or more future uncertainty levels. The system can process the inventory dataset using a trained machine learning model to generate a prediction of the plurality of inventory variables. The system can provide the processed inventory dataset to an optimization algorithm. The optimization algorithm can be used to predict a target inventory level for optimizing an inventory holding cost. The optimization algorithm can comprise one or more constraint conditions.
Abstract of: US2025265880A1
A method according to one embodiment includes determining, by a server, a location of a door in an architectural drawing and a room function of a room secured by the door based on an analysis of the architectural drawing, determining, by the server, proper access control hardware to be installed on the door based on the room function, a category of access control hardware, and a predictive machine learning model associated with the category of access control hardware, and generating, by the server, a specification based on the determined proper access control hardware.
Abstract of: EP4604410A1
Provided are a method and apparatus for monitoring a model in beam management by using artificial intelligence and machine learning. The method may include: in relation to a reference signal configured for a terminal, receiving second reference signal resource set configuration information of the reference signal for monitoring an AI/ML model; on the basis of the second reference signal resource set configuration information, measuring signal strength or signal quality for the reference signal; and reporting the performance result of the AI/ML model by comparing a measured value of the reference signal with a predicted value of the reference signal inferred via the AI/ML model.
Abstract of: CN119895449A
Methods, systems, and devices for wireless communication are described. A machine learning server may generate a set of low-dimensional parameters representing training data for the machine learning server, the training data associated with one or more communication environments or one or more channel environments, or a combination thereof. The machine learning server may receive, from one or more devices within a communication environment or a channel environment or both, a set of low-dimensional parameters representing test data associated with the communication environment or the channel environment or both. The machine learning server may generate a reproducibility metric according to a correlation between the set of parameters representing the training data and the set of parameters representing the test data. The machine learning server may send a message indicating the reproducibility metric to the one or more devices, and the one or more devices may perform a communication procedure based on the reproducibility metric.
Abstract of: WO2024081350A1
Provided are systems that include at least one processor to receive a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes, randomly sample a node to provide a set of randomly sampled nodes, generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes, and train a machine learning model based on an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, a center of the combined set of labeled anomaly nodes in an embedding space, and a center of the set of normal nodes in the embedding space. Methods and computer program products are also disclosed.
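A heavily simplified sketch of the augmentation-and-centers idea, assuming a mixup-style rule for generating new anomaly embeddings and a distance-based training signal (neither is specified by the abstract; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(labeled, sampled, alpha=0.5):
    """Generate new anomaly embeddings by mixing each labeled anomaly with
    a randomly sampled node (a mixup-style rule; the abstract does not
    specify the exact generation formula)."""
    return alpha * labeled + (1.0 - alpha) * sampled

labeled = rng.normal(loc=3.0, size=(4, 2))   # labeled anomaly embeddings
sampled = rng.normal(loc=0.0, size=(4, 2))   # randomly sampled nodes
normal = rng.normal(loc=0.0, size=(20, 2))   # normal node embeddings

combined = np.vstack([labeled, augment(labeled, sampled)])
anomaly_center = combined.mean(axis=0)
normal_center = normal.mean(axis=0)

# Sketch of a training signal: pull anomaly embeddings toward their own
# center and push them away from the center of the normal nodes.
loss = (np.linalg.norm(combined - anomaly_center, axis=1).mean()
        - np.linalg.norm(combined - normal_center, axis=1).mean())
```

The combined set doubles the number of labeled anomalies, and the two centers give the model a reference for separating anomalous from normal nodes in the embedding space.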
Abstract of: WO2025171236A1
Certain aspects of the disclosure provide systems and methods for diagnosis and treatment of suicidal thought and behavior (STB) through reward-aversion judgment and contextual variables. Methods include generating a set of STB parameters associated with a subject, the set of STB parameters based on reward-aversion judgment variables and contextual variables and processing the set of STB parameters with a machine learning model to generate an STB prediction. The subject may then be treated based on the STB prediction.
Abstract of: WO2025171357A1
A distributed generative artificial intelligence (AI) reasoning and action platform that utilizes a cloud-based computing architecture for neuro-symbolic reasoning. The platform comprises systems for distributed computation, curation, marketplace integration, and context management. A distributed computational graph (DCG) orchestrates complex workflows for building and deploying generative AI models, incorporating expert judgment and external data sources. A context computing system aggregates contextual data, while a curation system provides curated responses from trained models. Marketplaces offer data, algorithms, and expert judgment for purchase or integration. The platform enables enterprises to construct user-defined workflows and incorporate trained models into their business processes, leveraging enterprise-specific knowledge. The platform facilitates flexible and scalable integration of machine learning models into software applications, supported by a dynamic and adaptive DCG architecture.
Abstract of: WO2025170921A1
Disclosed are systems and methods for identifying a target molecule by receiving a molecule dataset from a library of molecules comprising chemical-structural representations of candidate molecules for a material application; determining one or more target molecular property values derived from a set of predictive molecular descriptor values obtained by applying a candidate molecule of the molecular dataset to a trained machine learning (ML) model, wherein the trained ML model is configured to output the set of predictive molecular descriptor values for a given molecule data input, and wherein the trained ML model was trained on a set of training data and the set of molecular descriptors generated from a molecular descriptor modeling application; and outputting, via a report or to a data store, the one or more target molecular property values for each candidate molecule of the molecular dataset.
Abstract of: WO2025170089A1
According to various embodiments of the present disclosure, an operation method for a first node in a wireless communication system is provided, the method comprising the steps of: receiving at least one synchronization signal from a second node; receiving control information from the second node; transmitting first communication environment data to the second node; receiving, from the second node, model information related to a first secondary artificial intelligence/machine learning (AI/ML) model based on a first sub-feature set related to the first communication environment data; transmitting, to the second node, second communication environment data changed from the first communication environment data; and receiving, from the second node, model update information for a second secondary AI/ML model, which is based on a second sub-feature set related to the second communication environment data and is changed from the first secondary AI/ML model.
Abstract of: US2025259077A1
Methods and systems are provided herein for generating optimized, hybrid machine learning models capable of performing tasks such as classification and inference in IoT environments. The models may be deployed as optimized, task-specific (and/or environment-specific) hardware components (e.g., custom chips to perform the machine learning tasks) or lightweight applications that can operate on resource constrained devices. The hybrid models may comprise hybridization modules that integrate output of one or more machine learning models, according to sets of hyperparameters that are refined according to the task and/or environment/sensor data that will be used by the IoT device.
Abstract of: US2025258969A1
The present disclosure relates to systems and methods for manufacturing a battery electrode plate. The system comprises a computing device configured to receive, from a client device, a target process factor among a plurality of process factors associated with manufacturing a battery electrode plate; predict, via a machine-learning model, a change in a characteristic of the battery electrode plate based on a change in a design value of the target process factor; generate information for selecting the target process factor based on the predicted change in the characteristic of the battery electrode plate; and transmit the information to the client device for manufacturing the battery electrode plate.
Abstract of: US2025258990A1
A method includes: training a machine learning model with a plurality of electronic circuit placement layouts; predicting, by the machine learning model, fix rates of design rule check (DRC) violations of a new electronic circuit placement layout; identifying hard-to-fix (HTF) DRC violations among the DRC violations based on the fix rates of the DRC violations of the new electronic circuit placement layout; and fixing, by an engineering change order (ECO) tool, the DRC violations.
Abstract of: US2025259070A1
A system, method, and computer-program product includes obtaining a decisioning dataset comprising a plurality of favorable decisioning records and at least one unfavorable decisioning record; detecting, via a machine learning algorithm, a favorable decisioning record of the plurality of favorable decisioning records that has a vector value closest to a vector value of the unfavorable decisioning record; executing a counterfactual assessment between the favorable decisioning record and the unfavorable decisioning record; generating an explainability artifact based on one or more bias intensity metrics to explain a bias in a machine learning-based decisioning model; and in response to generating the explainability artifact, displaying the explainability artifact in a user interface.
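The nearest-favorable-record step can be sketched as a simple nearest-neighbor search over vector values (record format, field names, and the Euclidean metric are illustrative assumptions):

```python
import math

def closest_favorable(favorables, unfavorable):
    """Return the favorable decisioning record whose vector value is
    nearest (Euclidean distance) to that of the unfavorable record --
    the pair used in the counterfactual assessment."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(favorables, key=lambda f: dist(f["vector"], unfavorable["vector"]))

favorables = [
    {"id": "A", "vector": [0.9, 0.1]},
    {"id": "B", "vector": [0.4, 0.6]},
]
unfavorable = {"id": "X", "vector": [0.5, 0.5]}
match = closest_favorable(favorables, unfavorable)
```

Comparing the unfavorable record against its closest favorable counterpart isolates the small feature differences that the counterfactual assessment and bias metrics then examine.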
Abstract of: US2025258821A1
Inferences may be obtained to handle access requests at a non-relational database system. An access request may be received at a non-relational database system. The non-relational database system may determine that the access request uses a machine learning model to complete the access request. The non-relational database system may cause an inference to be generated using data items for the access request as input to the machine learning model. The access request may be completed using the generated inference.
Abstract of: US2025258917A1
Apparatuses, systems, and techniques for classifying a candidate uniform resource locator (URL) as a malicious URL using a machine learning (ML) detection system. An integrated circuit is coupled to physical memory of a host device via a host interface. The integrated circuit hosts a hardware-accelerated security service that obtains a snapshot of data stored in the physical memory and extracts a set of features from the snapshot. The security service classifies the candidate URL as a malicious URL using the set of features and outputs an indication of the malicious URL.
Abstract of: US2025258311A1
System, method, and apparatus for classifying fracture quantity and quality of fracturing operation activities during hydraulic fracturing operations, the system comprising: a sensor coupled to a fracking wellhead, circulating fluid line, or standpipe of a well and configured to convert acoustic vibrations in fracking fluid in the fracking wellhead into an electrical signal; a memory configured to store the electrical signal; a converter configured to access the electrical signal from the memory and convert the electrical signal in a window of time into a current frequency domain spectrum; a machine-learning system configured to classify the current frequency domain spectrum, the machine-learning system having been trained on previous frequency domain spectra measured during previous hydraulic fracturing operations and previously classified by the machine-learning system; and a user interface configured to return a classification of the current frequency domain spectrum to an operator of the fracking wellhead.
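The converter's time-window-to-spectrum step might look like the following (a sketch assuming a digitized signal, a Hann window, and an FFT; the patent does not specify these choices):

```python
import numpy as np

def window_spectrum(signal, sample_rate, window_s=1.0):
    """Convert one time window of the digitized acoustic signal into a
    one-sided frequency-domain magnitude spectrum for the classifier."""
    n = int(sample_rate * window_s)
    tapered = signal[:n] * np.hanning(n)   # taper to reduce spectral leakage
    mags = np.abs(np.fft.rfft(tapered))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs, mags

# Synthetic 50 Hz tone standing in for wellhead acoustic vibrations.
rate = 1000
t = np.arange(rate) / rate
freqs, mags = window_spectrum(np.sin(2 * np.pi * 50.0 * t), rate)
peak_hz = freqs[np.argmax(mags)]
```

The resulting magnitude spectrum for each window is what the machine-learning system would classify against spectra from previous fracturing operations.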
Abstract of: WO2025168228A1
The present disclosure relates to a stable classification by components (SCBC) data processing architecture, configured to classify input data into one or more classes, comprising: a component detection module configured to compare the input data to a set of detection components, representing data patterns relevant for the classification, and determine a detection probability for each detection component based on the comparison. The SCBC data processing architecture further comprises a probabilistic reasoning module configured to compute one or more class prediction probabilities for the one or more classes based on the determined detection probabilities, a set of class-specific prior probabilities for the determined detection probabilities, and a set of class-specific reasoning probabilities for the determined detection probabilities. Application scenarios include medical and pharmaceutical applications, as well as healthcare in general such as interpretable and secure diagnosis and treatment recommendation systems. Related SCBC data processing system, methods and computer programs are also disclosed, as well as corresponding model training methods and systems.
Abstract of: US2025259727A1
Disclosed is a meal detection and meal size estimation machine learning technology. In some embodiments, the techniques entail applying to a trained multioutput neural network model a set of input features, the set of input features representing glucoregulatory management data, insulin on board, and time of day, the trained multioutput neural network model representing multiple fully connected layers and an output layer formed from first and second branches, the first branch providing a meal detection output and the second branch providing a carbohydrate estimation output; receiving from the meal detection output a meal detection indication; and receiving from the carbohydrate estimation output a meal size estimation.
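The two-branch multioutput network can be sketched as a toy forward pass (untrained random weights; the layer sizes, activations, and output scaling are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared fully connected trunk with two output branches, as in the
# abstract: one for meal detection, one for carbohydrate estimation.
# Inputs (illustrative): glucose trend, insulin on board, time of day.
W_shared = rng.normal(size=(3, 8))
w_detect = rng.normal(size=8)
w_carbs = rng.normal(size=8)

def forward(x):
    h = np.tanh(x @ W_shared)                          # shared hidden layer
    meal_prob = 1.0 / (1.0 + np.exp(-(h @ w_detect)))  # detection branch
    grams = max(h @ w_carbs, 0.0) * 100.0              # size branch, grams
    return meal_prob, grams

p, grams = forward(np.array([1.2, 0.4, 0.5]))
```

The detection branch emits a probability-like score while the estimation branch emits a non-negative carbohydrate quantity, matching the two outputs the abstract describes.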
Abstract of: US2025259735A1
Systems and methods for preprocessing input images in accordance with embodiments of the invention are disclosed. One embodiment includes a method for performing inference based on input data, the method includes receiving a set of real-valued input images and preprocessing the set of real-valued input images by applying a virtual optical dispersion to the set of real-valued input images to produce a set of real-valued output images. The method further includes predicting, using a machine learning model, an output based on the set of real-valued output images, computing a loss based on the predicted output and a true output, and updating the machine learning model based on the loss.
Abstract of: US2025259080A1
Automated computer systems and methods to determine a sentiment of information in digital information or content are disclosed. One aspect includes deriving, by a processor, the digital information from a source; generating, by the processor, a domain-specific machine learning sentiment score, based on the digital information, by one model of at least two machine learning models; autonomously mapping, by the processor, a non-domain specific knowledge graph of associations between elements in a set of digital contextual information; receiving, by the processor, sentiment graphs, each sentiment graph defining a sentiment; generating, by the processor, a graph sentiment score based on the non-domain specific knowledge graph and the sentiment graphs; generating, by the processor, a final sentiment score based on the graph sentiment score and the domain-specific machine learning sentiment score; and determining the sentiment of the information in the digital information or content via the final sentiment score.
Abstract of: US2025259083A1
Systems and techniques that facilitate data diversity visualization and/or quantification for machine learning models are provided. In various embodiments, a processor can access a first dataset and a second dataset, where a machine learning (ML) model is trained on the first dataset. In various instances, the processor can obtain a first set of latent activations generated by the ML model based on the first dataset, and a second set of latent activations generated by the ML model based on the second dataset. In various aspects, the processor can generate a first set of compressed data points based on the first set of latent activations, and a second set of compressed data points based on the second set of latent activations, via dimensionality reduction. In various instances, a diversity component can compute a diversity score based on the first set of compressed data points and second set of compressed data points.
Abstract of: US2025259103A1
The present disclosure describes a patent management system and method for remediating insufficiency of input data for a machine learning system. A prediction to be performed is received from a user input. Relevant input data is determined to perform the prediction. The relevant input data is determined by applying filters based on the prediction to be performed. Prediction is performed by generating a plurality of predicted vectors. A confidence score for the generated plurality of predicted vectors is determined. If the confidence score is less than a predetermined threshold, the prediction is unreliable. The input data is expanded by gathering additional input data. The input data is expanded with the additional input data until the confidence score exceeds the predetermined threshold. A predicted output is generated with the expanded input data. The prediction output and the confidence score are provided for rendering.
Abstract of: WO2025166404A1
This disclosure relates generally to detecting artificial intelligence (AI) implementation in a software application comprising one or more application packages (APs). One or more processors extract one or more AP strings from the software application, which each represent an AP; and create a prompt for a machine learning model, trained to generate output text, comprising the one or more AP strings, the prompt representing instructions to provide a classification and provide functionality information of each of the one or more APs, the classification being AI relevant or non-AI relevant and the functionality information describing a functionality of the respective AP. The one or more processors then evaluate the machine learning model on the prompt to generate output text corresponding to the classification and the functionality information of each of the one or more APs; and generate a report of the AI implementation based on the output text.
Abstract of: US2025259114A1
In general, in one aspect, embodiments relate to a method of producing a sustainable pipeline of pozzolanic materials that includes gathering unstructured and/or structured data publicly available on a network, identifying analytical data of a pozzolanic material using one or more machine learning models, where the analytical data is present within at least the structured data, extracting the analytical data from the structured data, predicting, using one or more predictive models, one or more performance characteristics of the pozzolanic material based at least in part on the analytical data, to form one or more predicted performance characteristics, comparing the predicted one or more performance characteristics to one or more minimum acceptable performance characteristics, storing the extracted analytical data and the one or more predicted performance characteristics in a database if the one or more performance characteristics meets or exceeds the minimum acceptable performance characteristic, and preparing a cement composition that includes the pozzolanic material if the predicted one or more performance characteristics meets or exceeds the one or more minimum acceptable performance characteristics.
Abstract of: US2025259078A1
Disclosed embodiments may provide techniques for extracting hypothetical statements from unstructured data. A computer-implemented method can include accessing input data that includes unstructured data. The computer-implemented method can also include processing the input data using a statement-extraction machine-learning model to generate a plurality of candidate hypothetical statements and summary data associated with the input data. The computer-implemented method can also include constructing one or more filtering prompts for filtering the plurality of candidate hypothetical statements. The computer-implemented method can also include processing the one or more filtering prompts and the plurality of candidate hypothetical statements using the statement-extraction machine-learning model to identify one or more hypothetical statements. In some instances, the one or more hypothetical statements correspond to one or more non-factual assertions associated with the unstructured data. The computer-implemented method can also include transmitting the summary data of the input data and the one or more hypothetical statements.
Abstract of: WO2025163013A1
A computer-implemented method is provided for training a machine learning model to identify one or more network events associated with a network and representing a network security threat. The method comprises: a) obtaining a first dataset comprising data representative of a plurality of network events in a first network; b) obtaining a second dataset comprising data representative of a plurality of network events in a second network; c) performing covariate shift analysis on the first dataset and the second dataset to identify and classify a plurality of differences between the first dataset and the second dataset; d) performing domain adaptation on the first dataset, based on a classified difference, to generate a training dataset; e) training a machine learning model using the training dataset to produce a trained threat detection model.
Abstract of: US2025254189A1
Identifying Internet of Things (IoT) devices with packet flow behavior including by using machine learning models is disclosed. A set of training data associated with a plurality of IoT devices is received. The set of training data includes, for at least some of the exemplary IoT devices, a set of time series features for applications used by the IoT devices. A model is generated, using at least a portion of the received training data. The model is usable to classify a given device.
Abstract of: US2025252112A1
A method and system for training a machine-learning algorithm (MLA) to rank digital documents at an online search platform. The method comprises training the MLA in a first phase for determining past user interactions of a given user with past digital documents based on a first set of training objects including the past digital documents generated by the online search platform in response to the given user having submitted thereto respective past queries. The method further comprises training the MLA in a second phase to determine respective likelihood values of the given user interacting with in-use digital documents based on a second set of training objects including only those past digital documents with which the given user has interacted and respective past queries associated therewith. The MLA may include a Transformer-based learning model, such as a BERT model.
Abstract of: US2025252412A1
A method may include determining a combination of values of attributes represented by reference data associated with payment transactions by training a machine learning model based on an association between (i) respective values of the attributes and (ii) the payment transactions having a given result. The combination may be correlated with having the given result. The method may also include selecting a subset of the payment transactions that is associated with the combination of values. The method may additionally include determining a first rate at which payment transactions of the subset have the given result during a first time period and a second rate at which one or more payment transactions associated with the combination have the given result during a second time period, and generating an indication that the two rates differ.
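The claimed two-period comparison reduces to selecting the subset matching the attribute combination and computing a result rate per period. The transaction fields, combination, and alert threshold below are illustrative, not taken from the patent:

```python
from datetime import date

# Hypothetical transaction records: attribute values, a result flag, and a date.
transactions = [
    {"country": "US", "channel": "web", "declined": True,  "day": date(2024, 1, 5)},
    {"country": "US", "channel": "web", "declined": True,  "day": date(2024, 1, 9)},
    {"country": "DE", "channel": "app", "declined": False, "day": date(2024, 1, 7)},
    {"country": "US", "channel": "web", "declined": False, "day": date(2024, 2, 3)},
    {"country": "US", "channel": "web", "declined": False, "day": date(2024, 2, 11)},
]

def result_rate(txns, combination, period, result_key="declined"):
    """Rate at which transactions matching `combination` have the given
    result within `period` (an inclusive (start, end) date pair)."""
    start, end = period
    subset = [t for t in txns
              if all(t[k] == v for k, v in combination.items())
              and start <= t["day"] <= end]
    if not subset:
        return 0.0
    return sum(t[result_key] for t in subset) / len(subset)

combo = {"country": "US", "channel": "web"}  # combination correlated with declines
first = result_rate(transactions, combo, (date(2024, 1, 1), date(2024, 1, 31)))
second = result_rate(transactions, combo, (date(2024, 2, 1), date(2024, 2, 29)))
if abs(first - second) > 0.5:  # illustrative alert threshold
    print(f"rate shift for {combo}: {first:.2f} -> {second:.2f}")
```

Here the January decline rate for the combination is 1.0 and the February rate is 0.0, so the indication that the two rates differ would be generated.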
Abstract of: US2025252341A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Abstract of: US2025252338A1
Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. In an example method, a current program state comprising a set of program instructions is accessed. A next program instruction is generated using a search operation, comprising generating a probability of the next program instruction based on processing the current program state and the next program instruction using a machine learning model, and generating a value of the next program instruction based on processing the current program state, the next program instruction, and a set of alternative outcomes using the machine learning model. An updated program state is generated based on adding the next program instruction to the set of program instructions.
Abstract of: WO2025160650A1
Described are various embodiments of a system and method for monitoring and optimizing player engagement. In some embodiments, the computer-implemented method comprises generating, on a server, a storage layer in the form of a graph drawn according to a schema description of objects and relationships in a virtual game environment. The server produces, from the received schema and learning system objectives, one or more instructions. The instructions are transmitted to and applied by a gaming device configured to execute a designated interactive software program, to produce, from the raw data generated, one or more embeddings. The embeddings are stored in the graph and retrieved to perform one or more data analysis tasks on the designated embeddings by one or more machine learning algorithms. The embeddings can be augmented or optimized into contextualized preference embeddings or contextualized timeline embeddings, to allow better contextual learning and predictive outputs.
Abstract of: WO2025163523A1
Method and servers for determining optimized device parameters of an electronic circuit. The method includes accessing design information of the electronic circuit, determining a set of device parameters of the electronic circuit based on the design information, receiving information indicative of a set of performances-of-interest to be optimized and a corresponding set of target values, each target value being associated with a corresponding performance-of-interest, defining a multi-objective reward function based on the set of performances-of-interest to be optimized, and outputting, using a pre-built Machine Learning (ML) algorithm interacting with an electronic design automation (EDA) environment, an optimized device parameter value for each of the device parameters based on the multi-objective reward function.
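One possible shape for such a multi-objective reward function is a weighted sum of normalized gaps between measured performances and their targets. The abstract does not fix a functional form, so the form, performance names, and weights below are assumptions for illustration:

```python
def multi_objective_reward(measured, targets, weights=None):
    """Scalar reward over a set of performances-of-interest: each term
    penalizes the normalized gap between the measured value and its
    target, so parameter values closer to all targets score higher."""
    weights = weights or {k: 1.0 for k in targets}
    total = 0.0
    for perf, target in targets.items():
        gap = abs(measured[perf] - target) / (abs(target) or 1.0)
        total -= weights[perf] * gap
    return total

# Hypothetical performances-of-interest for an amplifier sizing task.
targets = {"gain_db": 60.0, "power_mw": 1.0}
better = multi_objective_reward({"gain_db": 58.0, "power_mw": 1.1}, targets)
worse  = multi_objective_reward({"gain_db": 40.0, "power_mw": 2.0}, targets)
assert better > worse  # the nearer-to-target design earns the higher reward
```

An ML algorithm interacting with the EDA environment would then search device parameter values that maximize this reward.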
Abstract of: WO2025164720A1
A model generation device according to one aspect of the present disclosure acquires eyeball-related data measured from a subject who is viewing content, uses the acquired eyeball-related data to perform machine learning of an inference model, and outputs the results of the machine learning. The machine learning includes training the inference model to acquire, from the eyeball-related data, the ability to infer a semantic representation in an information space corresponding to content included in the viewed content. Thus, the present disclosure provides a technique for easily inferring, at low cost, content perceived by an individual while viewing content.
Abstract of: WO2025165604A1
The method may include inputting a first set of data into a first model; for each user in the second group, generating a first similarity score; generating a relevance score for each parameter; determining a subset of parameters based on relevance; inputting the subset of parameters, a second set of data, and a third set of data into a second model; generating a space-partitioning data structure based on the second set of data; for each user in the first group, determining a feature distance between a representation of the user in the first group and a representation of a user in the second group based on the third set of data and the space-partitioning data structure; for each user in the second group, generating a second similarity score; and for each user in the second group, generating an overall similarity score.
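The space-partitioning data structure used for the feature-distance step could be, for example, a k-d tree. The abstract does not name a specific structure, so the following minimal k-d tree with nearest-neighbour search, over hypothetical user feature vectors, is only one way to realize it:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree (a space-partitioning data structure)
    over feature vectors; each node splits on one coordinate axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Nearest-neighbour search: descend toward the query, then backtrack
    only into branches that could still contain a closer point."""
    if node is None:
        return best
    dist = math.dist(node["point"], query)
    if best is None or dist < best[1]:
        best = (node["point"], dist)
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:  # splitting plane closer than current best
        best = nearest(far, query, best)
    return best

# Hypothetical feature vectors for users in the second group.
group2 = [(0.1, 0.9), (0.8, 0.2), (0.5, 0.5), (0.9, 0.9)]
tree = build_kdtree(group2)
# Feature distance from a first-group user's representation to its
# closest second-group counterpart.
point, feature_distance = nearest(tree, (0.45, 0.55))
```

The resulting distances could then feed the second similarity scores in the final steps of the method.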
Abstract of: WO2025165854A1
A method can include receiving by a computational device at a wellsite, real-time, time series data from a pump system operating at the wellsite, where the wellsite includes a wellbore in contact with a fluid reservoir; using the computational device, processing a portion of the time series data to generate feature values as input to a trained machine learning model to detect pump system behavior indicative of a forthcoming performance issue of the pump system; and issuing a signal responsive to detection of the pump system behavior to mitigate the forthcoming performance issue of the pump system.
Abstract of: WO2025166095A1
Methods, systems, and computer program products are provided for determining feature importance using Shapley values associated with a machine learning model. An example method includes training a classification machine learning model, performing a plurality of feature ablation procedures on the classification machine learning model using a plurality of features to provide a distribution of feature ablation outcomes, training an explainer neural network machine learning model based on the distribution of the feature ablation outcomes to provide a trained explainer neural network machine learning model, wherein the explainer neural network machine learning model is configured to provide an output that comprises a prediction of a Shapley value associated with a feature, and determining one or more Shapley values of an input feature using the explainer neural network machine learning model.
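The Shapley values such an explainer network would be trained to predict can be computed exactly for a handful of features by enumerating feature coalitions, which is exactly the cost that motivates a learned approximation. A toy exact computation (the baseline-substitution convention and the additive model are illustrative choices, not from the patent):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, instance):
    """Exact Shapley values by coalition enumeration; features absent
    from a coalition are replaced by their baseline value. The cost is
    exponential in the number of features, which is what motivates
    training a cheaper explainer network on ablation outcomes."""
    n = len(instance)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [instance[j] if j in coalition or j == i else baseline[j]
                          for j in range(n)]
                without_i = [instance[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

# Toy additive model: Shapley values should recover each term's contribution.
model = lambda x: 3 * x[0] + 2 * x[1]
phi = shapley_values(model, baseline=[0, 0], instance=[1, 1])
```

For the additive toy model the result is [3.0, 2.0], matching each feature's coefficient; a trained explainer network would approximate such values in a single forward pass.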
Abstract of: WO2025165133A1
The disclosure relates to a 5G or 6G communication system for supporting a higher data transmission rate. The present disclosure provides a method and system for discovering an Artificial Intelligence/Machine Learning (AI/ML) model for transfer learning. The method comprises: receiving, from a second entity, a first request message for requesting to store machine learning (ML) model information; verifying whether the second entity is authorized to store the ML model, wherein the ML model is identified by the analytics ID included in the first request message; storing the ML model information, based on the request message and a result of the verification for the second entity; and transmitting, to the second entity, a first response message, as a response to the request message, for indicating an identifier for the ML model.
Abstract of: EP4597360A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Abstract of: EP4596425A1
The present disclosure provides techniques for dynamic utilization of aircraft based on environmental conditions. A proposed flight plan for an aircraft is received. Environment data representing a set of environmental conditions at a source airport indicated in the proposed flight plan is collected. Weather data representing a set of environmental conditions at a destination airport indicated in the proposed flight plan is collected. Operation data related to the aircraft indicated in the proposed flight plan is received. Aircraft engine degradation of the aircraft is dynamically simulated based on the collected environment data and the received operation data using a trained machine learning (ML) model. The simulated aircraft engine degradation is output.
Abstract of: EP4597317A1
Disclosed are an optimization method for distributed execution of a deep learning task and a distributed system. The method includes: a computation graph is generated based on a deep learning task and hardware resources are allocated for the distributed execution of the deep learning task; the allocated hardware resources are grouped to obtain at least one grouping scheme; for each grouping scheme, tensor information related to multiple operators contained in the computation graph is split based on the value of at least one factor under this grouping scheme to obtain multiple candidate splitting solutions; and an optimal-efficiency solution for executing the deep learning task on the hardware resources is selected by using a cost model. Through operator splitting based on device grouping combined with optimization solving based on the cost model, automatic optimization of distributed execution for various deep learning tasks is realized. Furthermore, computation graph partitioning based on grouping can be introduced, and the solving space can be restricted according to different levels of optimization, thereby generating a distributed execution solution of the required optimization level within controllable time.
Abstract of: WO2024073382A1
Dynamic timers are determined using machine learning. The timers are used to control the amount of time that new data transaction requests wait before being processed by a data transaction processing system. The timers are adjusted based on changing conditions within the data transaction processing system. The dynamic timers may be determined using machine learning inference based on feature values calculated as a result of the changing conditions.
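The inference step can be pictured as mapping current-condition feature values to a wait time that is clamped to an allowed range. The linear weights below stand in for whatever trained model the system would use, and the feature names are illustrative:

```python
def compute_timer_ms(features, weights, bias, floor=0.0, ceiling=50.0):
    """Infer a wait time (ms) from system-condition features and clamp it
    to an allowed range, so the timer adjusts as conditions change."""
    raw = bias + sum(weights[name] * value for name, value in features.items())
    return max(floor, min(ceiling, raw))

# Stand-in model weights; a real system would learn these.
weights = {"queue_depth": 0.8, "msg_rate_k": 1.5, "volatility": 10.0}
calm   = compute_timer_ms({"queue_depth": 2,  "msg_rate_k": 1.0, "volatility": 0.1},
                          weights, bias=1.0)
stress = compute_timer_ms({"queue_depth": 40, "msg_rate_k": 8.0, "volatility": 2.0},
                          weights, bias=1.0)
assert calm < stress  # busier conditions produce a longer wait, up to the ceiling
```

Recomputing the features as conditions change, then re-running the inference, is what makes the timer dynamic.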
Abstract of: GB2637669A
Performing predictive inferences on a first natural language document having first sentences 611 and a second natural language document having second sentences 612, wherein, for each sentence from the first and second sentences, a sentence embedding is generated using a sentence embedding machine learning model 601, the embedding model being generated by updating parameters of an initial, preferably pretrained, embedding model based on a similarity determination model error measure; for each sentence pair comprising a first and a second sentence, an inferred similarity measure 631 is determined using the similarity determination machine learning model 602 and the sentence embedding for each sentence 621, 622; for each similarity measure a predictive output is generated and prediction-based actions are performed based on the output. The predictive output may comprise generating a cross-document relationship graph with nodes and edges representing relationships between sentences. The similarity determination model error measure can be based on a deviation measure and a ground-truth similarity measure for a training sentence pair. The first document data object is preferably a user-provided query and the predictive output a search result.
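A cross-document relationship graph of the kind described can be sketched by scoring every cross-document sentence pair and keeping edges above a threshold. Cosine similarity, the toy embeddings, and the threshold are assumptions here; the patent's similarity determination model is learned, not a fixed metric:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sentence embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical embeddings for two documents' sentences (a real system
# would produce these with the tuned embedding model 601).
doc1 = {"s1": [0.9, 0.1, 0.0], "s2": [0.0, 1.0, 0.2]}
doc2 = {"t1": [0.8, 0.2, 0.1], "t2": [0.1, 0.0, 1.0]}

# Cross-document relationship graph: nodes are sentences, and an edge
# connects each sufficiently similar cross-document sentence pair.
threshold = 0.8
edges = [(a, b) for a, ua in doc1.items() for b, vb in doc2.items()
         if cosine(ua, vb) >= threshold]
```

With these toy vectors only the pair (s1, t1) clears the threshold, so the graph would contain that single edge.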
Abstract of: GB2637695A
A combined hyperparameter and proxy model tuning method is described. The method involves iterations of a hyperparameter search 102. In each search iteration, candidate hyperparameters are considered. An initial ('seed') hyperparameter is determined by initialization function 110 and used to train (104) one or more first proxy models on a target dataset 101. From the first proxy model(s), one or more first synthetic datasets are sampled using sampling function 108. A first evaluation model is fitted to each first synthetic dataset, for each candidate hyperparameter, by applying fit function 106, enabling each candidate hyperparameter from hyperparameter generator 112 to be scored. Based on the respective scores assigned to the candidate hyperparameters, a candidate hyperparameter is selected and used (103) to train one or more second proxy models on the target dataset. The hyperparameter search may be random, grid, or Bayesian. Scores from scoring function 114 can be F1 scores. Uses include generative causal models with neural network architectures.
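The proxy-and-score loop can be caricatured in a few lines: fit a trivial proxy to the target data, sample a synthetic dataset from it, and score each candidate hyperparameter against the synthetic sample. Every model choice here (Gaussian proxy, shrinkage hyperparameter, negative-MSE score) is a stand-in for the patent's proxy and evaluation models:

```python
import random
import statistics

def proxy_hparam_search(target, candidates, n_synthetic=200, seed=0):
    """Sketch of proxy-based hyperparameter scoring: fit a Gaussian
    'proxy model' to the target data, sample a synthetic dataset from
    it, and score each candidate hyperparameter by how well an
    evaluation model fitted with that hyperparameter explains the
    synthetic sample."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(target), statistics.pstdev(target)
    synthetic = [rng.gauss(mu, sigma) for _ in range(n_synthetic)]

    def score(shrinkage):
        # Evaluation model: shrink the synthetic mean toward 0; score is
        # negative mean squared error on the synthetic dataset.
        est = (1 - shrinkage) * statistics.mean(synthetic)
        return -statistics.mean((x - est) ** 2 for x in synthetic)

    return max(candidates, key=score)

# Candidate hyperparameters; the data's mean is far from 0, so no
# shrinkage should win.
target = [5.1, 4.9, 5.3, 5.0, 4.8]
best = proxy_hparam_search(target, candidates=[0.0, 0.5, 0.9])
```

The selected candidate would then, per the method, be used to train the second proxy models on the target dataset in the next iteration.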
Abstract of: CN119949012A
An apparatus for wireless communication by a first wireless local area network (WLAN) device has a memory and one or more processors coupled to the memory. The processor is configured to transmit a first message indicating support of the first WLAN device for machine learning. The processor is also configured to receive a second message from a second WLAN device. The second message indicates support of the second WLAN device for one or more machine learning model types. The processor is configured to activate a machine learning session with the second WLAN device based at least in part on the second message. The processor is further configured to receive machine learning model structure information and machine learning model parameters from the second WLAN device during the machine learning session.
Publication No.: EP4598100A1 06/08/2025
Applicant:
LG ELECTRONICS INC [KR]
Abstract of: EP4598100A1
A method performed by a device supporting artificial intelligence/machine learning (AI/ML) in a wireless communication system, according to at least one of embodiments disclosed in the present specification, comprises: receiving a configuration for an AI/ML model from a network; performing monitoring on performance of the AI/ML model on the basis of outputs from the AI/ML model; and performing AI/ML model management of maintaining the AI/ML model or at least partially changing the AI/ML model on the basis of the monitoring of the performance of the AI/ML model, wherein the monitoring of the performance of the AI/ML model may comprise first monitoring for monitoring performance of one or two or more intermediate outputs obtained before a final output from the AI/ML model, and second monitoring for monitoring performance of the final output obtained on the basis of the one or two or more intermediate outputs.
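The two-stage monitoring can be pictured as checking intermediate-output quality first and final-output quality second, with either failure triggering a model change. The score scale, thresholds, and decision strings below are illustrative, not from the specification:

```python
def manage_model(intermediate_scores, final_score, thresholds=(0.7, 0.8)):
    """Two-stage AI/ML model management sketch: first monitoring covers
    the intermediate outputs, second monitoring covers the final output;
    the model is maintained only if both stages pass."""
    inter_thr, final_thr = thresholds
    if any(s < inter_thr for s in intermediate_scores):
        return "change: intermediate output degraded"
    if final_score < final_thr:
        return "change: final output degraded"
    return "maintain"

# Healthy intermediate and final outputs keep the configured model.
decision_ok = manage_model([0.90, 0.95], final_score=0.85)
# A degraded intermediate output triggers a change before the final
# output is even considered.
decision_bad = manage_model([0.60, 0.95], final_score=0.85)
```

Monitoring intermediate outputs separately lets the device localize which stage of the model degraded, rather than observing only an end-to-end failure.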