Abstract of: US20260045348A1
Examples described herein generally relate to recommending drug dosage reductions for a patient. A computer system may generate an initial non-linear glide path of recommended dosages, starting at an initial dosage of a drug for a patient and ending at a goal dosage at an estimated time of arrival. The system may receive periodic patient monitoring data including at least one drug withdrawal scale score, at least one anxiety scale score, and at least one indicated side effect. The system may determine, using one or more machine learning algorithms, a revised glide path based on a data record for the patient, the at least one drug withdrawal scale score, and the at least one anxiety scale score for the patient. The system may recommend at least one medication or therapy for the indicated side effect. The system may determine a prescription adjustment based on the revised glide path.
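As an illustration of the glide-path idea, a minimal sketch follows. The geometric taper and all names (`glide_path`, the milligram units, the step count) are assumptions for illustration; the abstract does not specify the shape of the non-linear curve.

```python
def glide_path(initial_mg: float, goal_mg: float, steps: int) -> list[float]:
    """Recommended dosage at each review point, tapering geometrically
    from the initial dosage down to the goal dosage."""
    ratio = (goal_mg / initial_mg) ** (1.0 / steps)
    return [round(initial_mg * ratio ** i, 2) for i in range(steps + 1)]

# Taper 20 mg down to 5 mg over four review intervals.
path = glide_path(20.0, 5.0, steps=4)
```

A revised glide path, as described in the abstract, could then be produced by re-running such a generator with a later goal date whenever monitoring scores worsen.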
Abstract of: US20260042011A1
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
Abstract of: WO2026033326A1
An apparatus including at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: transmit, to a network entity, a configuration used when at least one model is trained; wherein the at least one model is an artificial intelligence or machine learning model; and receive, from the network entity, information related to a consistency between the configuration used when the at least one model is trained and a configuration used when the at least one model is to be applied during inference.
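The consistency check described above can be pictured as a field-by-field comparison of the training-time configuration against the inference-time configuration. The configuration fields below (`bandwidth_mhz`, `num_beams`) are hypothetical examples, not fields named in the abstract.

```python
def consistency_report(train_cfg: dict, infer_cfg: dict) -> dict[str, bool]:
    """For each configuration field, True if the value used when the model
    was trained matches the value to be applied during inference."""
    keys = set(train_cfg) | set(infer_cfg)
    return {k: train_cfg.get(k) == infer_cfg.get(k) for k in keys}

report = consistency_report(
    {"bandwidth_mhz": 20, "num_beams": 32},
    {"bandwidth_mhz": 20, "num_beams": 64},
)
# "num_beams" is flagged as inconsistent between training and inference.
```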
Abstract of: WO2026035512A1
A network device (PRU, WTRU) may receive a request to collect data for artificial intelligence or machine learning (AI/ML) positioning model training, for example from a network data analytics function (NWDAF) and/or from a model training logical function (MTLF) (450b). The request may include an indication of an area of interest, a time window associated with the data for AI/ML positioning model training, a requested number of data samples of the data for AI/ML positioning model training, and/or a data source type of the data for AI/ML positioning model training. The network device may receive the data for AI/ML positioning model training and/or receive location data associated with one or more WTRUs. The network device may send the location data and the data for AI/ML positioning model training to the NWDAF or the MTLF (485, 495).
Abstract of: WO2026035335A1
Certain aspects of the present disclosure provide techniques and apparatus for machine learning. In an example method, a machine learning model comprising a plurality of layers, and a set of input data for the machine learning model, are accessed. A combination of hyperparameters for the machine learning model is selected based on the set of input data, comprising selecting, for each respective layer of the plurality of layers, a respective cache size based on the input data. The machine learning model is deployed according to the combination of hyperparameters.
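The per-layer cache sizing can be sketched as follows. The sizing rule here (a power of two at least as large as the layer's activation footprint for the given batch) is an assumption for illustration; the abstract only states that each layer's cache size is selected based on the input data.

```python
def select_cache_sizes(layer_widths: list[int], batch: int) -> list[int]:
    """Pick a cache size per layer: the smallest power of two that can
    hold that layer's activations for the given batch of input data."""
    sizes = []
    for width in layer_widths:
        need = width * batch          # activation elements this layer holds
        size = 1
        while size < need:            # round up to the next power of two
            size *= 2
        sizes.append(size)
    return sizes
```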
Abstract of: WO2026035375A1
Aspects of the disclosure are directed to a (e.g., capability-based window) configuration for a reference signal receive (RS-Rx) resource-based processing task associated with an artificial intelligence machine learning (AIML) model. In an aspect, the RS-Rx resource-based processing task may be related to sensing, positioning, or another task type (e.g., beam management, channel state information (CSI) operations, etc.). In an aspect, the RS-Rx task may be associated with any type of RS-Rx resource relative to the UE (e.g., downlink reference signals, sidelink reference signals, etc.). Such aspects may provide various technical advantages, such as AIML processing window configurations based on AIML model-specific capabilities of the UE, which may improve functionalities associated with the AIML model (e.g., improved sensing, positioning, or beam management) and/or improve AIML model monitoring.
Abstract of: WO2026032684A1
Disclosed are devices, methods, apparatuses, and computer readable media for fallback of machine learning functionality. An example apparatus for a terminal device may include at least one processor and at least one memory. The at least one memory may store instructions that, when executed by the at least one processor, may cause the apparatus at least to: receive from a network, at least one first configuration for a machine learning functionality of a determined network function, and a second configuration for a non-machine learning functionality of the determined network function, wherein the second configuration is a fallback configuration from the first configuration; receive from the network, a first indication instructing the terminal device to activate fallback from the machine learning functionality; and in response to the first indication, apply modifications to the first configuration for use during fallback, and enable the second configuration in the network function.
Abstract of: US20260043656A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining elements of a shipping network. One of the methods includes obtaining environmental input data, wherein the environmental input data includes weather forecast data; providing the environmental input data to a circulation model; and providing output environmental conditions from the circulation model to a machine learning model trained to generate a route for a ship.
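The data flow in this abstract (weather forecast, then circulation model, then routing model) can be sketched as a simple function composition. Both models below are toy placeholder functions with made-up physics and waypoints, standing in for the patented models.

```python
def circulation_model(forecast: dict) -> dict:
    """Placeholder circulation model: derive surface currents from wind
    speed (purely illustrative, not a real ocean model)."""
    return {"current_kts": 0.03 * forecast["wind_kts"]}

def route_model(conditions: dict) -> list[str]:
    """Placeholder trained routing model: take the sheltered detour via
    waypoint C when currents are strong."""
    return ["A", "C", "B"] if conditions["current_kts"] > 0.5 else ["A", "B"]

route = route_model(circulation_model({"wind_kts": 30.0}))
```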
Abstract of: WO2026034877A1
The present invention relates to a method by which a terminal selects a beam to be reported in machine learning-based beam management, the method comprising the steps of: receiving, from a base station, configuration information of a measurement resource set and a number M of report beams for AI/ML inference; determining, on the basis of measurement values of the measured beams, a beam to be reported; and transmitting the determined beam information to the base station, wherein, when the number of candidate beams to be reported exceeds M due to tie beams having the same or similar measurement values, the final beams to be reported are determined by excluding at least one of the tie beams through a tie-beam processing operation.
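One possible tie-beam processing rule is sketched below: rank beams by measurement, and break exact ties deterministically by beam index so that excess tie beams are excluded from the report. The index-based tie-break is an assumption; the abstract leaves the processing operation open.

```python
def select_report_beams(rsrp: dict[int, float], m: int) -> list[int]:
    """Return the indices of the M beams to report. Beams are ranked by
    RSRP (strongest first); ties are broken by the lower beam index."""
    ranked = sorted(rsrp, key=lambda b: (-rsrp[b], b))
    return ranked[:m]

# Beams 2 and 3 tie at -75.0 dBm; with M = 1 the tie-processing rule
# excludes beam 3 and reports beam 2 only.
chosen = select_report_beams({1: -80.0, 2: -75.0, 3: -75.0, 4: -90.0}, m=1)
```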
Abstract of: US20260044798A1
A case assistant is provided to client support professionals, which utilizes robotic process automation (RPA) technologies to analyze large amounts of data related to historical client cases that are similar to current open cases, data related to skilled experts associated with similar client cases, and data related to business exceptions. Several processes are utilized to provide this data to client support professionals, including a document similarity finder that utilizes a vector data collector, a tokenizer, a stop word remover, a relevance finder, and a similarity finder, several of which utilize a variety of machine learning technologies. Additional processes include a skilled experts finder and a business exceptions finder.
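The tokenizer, stop word remover, and similarity finder named in the abstract can be pictured as a plain bag-of-words cosine similarity over case descriptions. The stop word list and the unweighted term counts are illustrative simplifications of whatever vectorization the system actually uses.

```python
import math

STOP_WORDS = {"the", "a", "is", "to", "of"}   # illustrative stop word list

def vectorize(text: str) -> dict[str, int]:
    """Tokenize, drop stop words, and count remaining terms."""
    counts: dict[str, int] = {}
    for token in text.lower().split():
        if token not in STOP_WORDS:
            counts[token] = counts.get(token, 0) + 1
    return counts

def cosine(u: dict[str, int], v: dict[str, int]) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0
```

A document similarity finder would rank historical cases by `cosine` against the open case's vector and surface the top matches.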
Abstract of: US20260044745A1
Certain aspects of the present disclosure provide techniques and apparatus for machine learning. In an example method, a machine learning model comprising a plurality of layers, and a set of input data for the machine learning model, are accessed. A combination of hyperparameters for the machine learning model is selected based on the set of input data, comprising selecting, for each respective layer of the plurality of layers, a respective cache size based on the input data. The machine learning model is deployed according to the combination of hyperparameters.
Abstract of: US20260044690A1
Disclosed are various embodiments for automated translations for autonomous chat agents. A build service can send a translation request to a machine translation service, the translation request comprising training data in a first language and the translation request specifying a second language. The build service can then receive translated training data from the machine translation service, the translated training data having been translated from the training data into the second language. Next, the build service can create a translated workflow that comprises a translated machine learning model and a translated intent. Subsequently, the build service can add the translated training data to the translated workflow and train the translated machine learning model using the translated training data.
Abstract of: WO2026035326A1
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
Abstract of: US20260044803A1
A method can include receiving input data comprising a plurality of features for a plurality of users. A method can include providing the input data to a risk prediction model configured to predict a termination likelihood for each user. In some implementations, the risk prediction model can be a random forest model. A method can include identifying, based on the predicted termination likelihood for each user, an at-risk population including users with a termination risk above a threshold amount. A method can include determining, for each user of the at-risk population, a profile type of a plurality of profile types. The profile type can describe certain attributes of the user. In some implementations, an end user can select a profile type. A method can include outputting members of the at-risk population having the selected profile type.
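The thresholding and profile-filtering steps can be sketched as below. The risk model itself (a random forest, in some implementations) is replaced by precomputed scores, and the field names are assumptions.

```python
def at_risk(users: list[dict], threshold: float, profile: str) -> list[str]:
    """Return the ids of users whose predicted termination risk exceeds
    the threshold and whose profile type matches the selected one."""
    return [u["id"] for u in users
            if u["risk"] > threshold and u["profile"] == profile]

members = at_risk(
    [{"id": "a", "risk": 0.90, "profile": "new_hire"},
     {"id": "b", "risk": 0.40, "profile": "new_hire"},
     {"id": "c", "risk": 0.95, "profile": "veteran"}],
    threshold=0.5, profile="new_hire",
)
```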
Abstract of: CN120898407A
Embodiments of the present disclosure provide machine learning model feature selection in a communication network. The method includes, in response to a feature selection trigger of a first machine learning model, determining a target input feature set for an analysis task based on contextual information related to the analysis task, the first machine learning model being currently provisioned for performing the analysis task based on a current input feature set, the current input feature set being different from the target input feature set; and causing a second machine learning model to be provisioned to perform the analysis task based on the determined set of target input features. In this manner, the machine learning model may be supplied with an optimized set of input features that is applicable to the current network context and provides an acceptable level of model performance.
Abstract of: EP4694428A1
A method performed by a first device in a wireless communication system, according to at least one embodiment among the embodiments disclosed in the present specification, comprises: receiving, from a second device, one or more data sets related to positioning; training an artificial intelligence/machine learning (AI/ML) model on the basis of at least a portion of the one or more data sets; and acquiring positioning information output from the trained AI/ML model, wherein data label-related information is given to each of the received one or more data sets, and the data label-related information may include positioning-related actual measurement information and information related to the quality of the actual measurement information.
Abstract of: WO2024211680A1
A device, a method, a system and one or more computer-readable media. A first example device is to host a management service (MnS) producer for a wireless cellular network. One or more processors of the first device are to receive, from an MnS consumer, a request to perform AI/ML emulation in one or more available machine learning (ML) emulation environments; and send to the MnS consumer one or more instances of an information object class (IOC) associated with the process of the AI/ML emulation. A second example device is to host an MnS consumer. One or more processors of the second device are to send, to an MnS producer, a request to perform AI/ML emulation in one or more available machine learning (ML) emulation environments; and receive, from the MnS producer, one or more instances of an information object class (IOC) associated with the process of the AI/ML emulation.
Abstract of: EP4693123A1
A biomass utilization support device: acquires biomass information relating to a biobased material and product information for each of a plurality of products, including information about the materials constituting the products; uses a machine learning model, which has been trained to estimate appropriate values for replacement amounts in a case of replacing a portion of the materials constituting the products with the biobased material, together with the acquired biomass information and product information, to estimate the appropriate values for each of the plurality of products; calculates, for each of the plurality of products, environmental impact indicators in a case in which a portion of the materials constituting the products has been replaced with the biobased material at the replacement amounts represented by the estimated appropriate values; and outputs support information listing the estimated appropriate values and the calculated environmental impact indicators.
Abstract of: EP4693331A1
This learning model generation device 10 is equipped with a learning model generation unit 11. When a function expressing a change in an inspection value obtained by inspecting a person is set, the unit generates a learning model in which the inspection value is the explanatory variable and the function's parameter is the objective variable, by performing machine learning using the inspection values of sample people and the parameters of the function for those sample people as training data.
Abstract of: EP4693046A1
Systems, computer program products, and methods are described for resource allocation in a hybrid distributed computational environment. An example system segments a received task into multiple sub-tasks. Upon partitioning the task, each sub-task is assigned to the appropriate computational resource (e.g., CPU, GPU, or QPU), enabling parallel execution of multiple sub-tasks. Both task partitioning and computational resource assignment are determined using a machine learning model. Additionally, the machine learning model may continuously monitor the execution of each sub-task by receiving resource utilization information and performance metrics associated with the execution of each sub-task. The resource utilization information and performance metrics may then be used to update the machine learning model.
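The resource-assignment step can be pictured as a dispatcher routing each sub-task to CPU, GPU, or QPU. Where the abstract uses a machine learning model for this decision, a fixed routing rule on a hypothetical "kind" tag stands in here.

```python
# Hypothetical routing rule standing in for the learned assignment model.
ROUTE = {"control": "CPU", "tensor": "GPU", "quantum": "QPU"}

def assign(subtasks: list[dict]) -> dict[str, list[str]]:
    """Group sub-task names by the computational resource each is
    assigned to; unknown kinds default to the CPU."""
    plan: dict[str, list[str]] = {"CPU": [], "GPU": [], "QPU": []}
    for task in subtasks:
        plan[ROUTE.get(task["kind"], "CPU")].append(task["name"])
    return plan
```

Each resource's list can then be executed in parallel, with per-sub-task utilization metrics fed back to refine the assignment model.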
Abstract of: WO2026029823A1
This disclosure describes a framework for performing user-requested tasks automatically across an interactive interface using various types of machine learning models. Specifically, this disclosure outlines and describes a task execution system that utilizes a generative artificial intelligence (AI) action model and retrieval-augmented generation (RAG) to complete user-requested actions across an interactive interface. The task execution system addresses many of the current limitations of large action models (LAMs) by using a generative AI action model to determine a session plan, which includes a set of actions for accomplishing stages of the actionable task across the interactive interface; obtaining visual context information for each interactive interface segment; integrating RAG results to improve the accuracy of both the session plan and individual actions; and self-correcting when faced with unexpected obstacles.
Abstract of: US20260037842A1
A Contrastive Forecasting Explanation (CFE) tool and technique provides a model-agnostic approach to forecasting explanation. The CFE tool uses an ML-based surrogate forecaster as a surrogate model. The surrogate forecaster includes a time series preprocessor, a simple concept generator, and an ML forecaster. The subsequent interpretation of the predictions of the time series forecaster is based on the behavior of the surrogate forecaster. The CFE tool interprets time series forecasts by identifying the specific temporal concepts impacting predictions and thus generates clear and reliable explanations regardless of model type. The simple concepts and predictions generated by the surrogate model are input into a perturbation-based explainer to produce feature attributions from the surrogate model. An attribution postprocessor aggregates the attributions into more coherent concepts to present a coherent, concise, and interpretable explanation.
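The perturbation-based explainer can be sketched as follows: zero out one input concept at a time and record how far the surrogate's prediction moves. The surrogate here is a toy weighted sum standing in for the ML forecaster, and the zeroing perturbation is one common choice, not necessarily the one the CFE tool uses.

```python
def surrogate(concepts: list[float]) -> float:
    """Toy surrogate forecaster: a fixed weighted sum of concept values."""
    weights = [0.5, 0.3, 0.2]
    return sum(w * c for w, c in zip(weights, concepts))

def attributions(concepts: list[float]) -> list[float]:
    """Attribution of each concept: the drop in the surrogate's prediction
    when that one concept is zeroed out."""
    base = surrogate(concepts)
    attrs = []
    for i in range(len(concepts)):
        perturbed = list(concepts)
        perturbed[i] = 0.0
        attrs.append(base - surrogate(perturbed))
    return attrs
```

An attribution postprocessor would then aggregate these per-concept scores into a concise explanation of the forecast.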
Abstract of: WO2026030336A1
A system may access a set of training data and determine a timeframe associated with a positively labeled data item of the training data. A system may generate at least two new positively labeled data items based on the positively labeled data item to produce augmented training data. A system may train a machine learning model by applying the augmented training data as input to the machine learning model and modifying a weight of the machine learning model.
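The augmentation step can be sketched as deriving two new positive items from one item's timeframe. The shift of one time unit in each direction is an assumption; the abstract only requires that at least two new positively labeled items be generated from the original.

```python
def augment(item: dict) -> list[dict]:
    """From one positively labeled item with a known timeframe, derive
    two new positive items by shifting the window one unit each way."""
    start, end = item["start"], item["end"]
    return [
        {"start": start - 1, "end": end - 1, "label": 1},
        {"start": start + 1, "end": end + 1, "label": 1},
    ]

augmented = augment({"start": 10, "end": 20, "label": 1})
```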
Abstract of: WO2026030330A1
Techniques are disclosed herein for providing and using a natural language to logical form model having execution and semantic error correction capabilities. In one aspect, a method is disclosed that includes: accessing a set of training examples and generating a set of error correction training examples via an iterative process performed for each training example. The iterative process includes: generating an inferred logical form; executing the inferred logical form on a database; when executing the inferred logical form on the database fails, obtaining an execution error message corresponding to the failure and recording the inferred logical form and the execution error message as part of an execution error example; and populating an error correction prompt template with the execution error example to generate an error correction training example. A machine learning model may then be trained with at least the set of error correction training examples.
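The iterative process can be sketched as follows: execute each inferred logical form, and when execution fails, fold the form and its error message into an error-correction prompt. The prompt template and the toy "database" (a dict lookup standing in for query execution) are assumptions.

```python
# Hypothetical error-correction prompt template.
TEMPLATE = "Query: {query}\nError: {error}\nFix the query."

def build_error_examples(inferred: list[str], db: dict[str, int]) -> list[str]:
    """For each inferred logical form that fails to execute, populate the
    prompt template with the form and its execution error message."""
    examples = []
    for query in inferred:
        try:
            db[query]                  # stand-in for executing the form
        except KeyError as exc:
            error = f"unknown relation {exc}"
            examples.append(TEMPLATE.format(query=query, error=error))
    return examples

examples = build_error_examples(["count_users", "count_orders"],
                                {"count_users": 3})
```

Forms that execute successfully produce no error-correction example; only the failures feed the corrective training set.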
Publication No.: WO2026030526A1 05/02/2026
Applicant:
MASSACHUSETTS INSTITUTE OF TECHNOLOGY [US]
Abstract of: WO2026030526A1
Quantum-secure, multiparty computation enables the joint evaluation of multivariate functions across distributed users while ensuring the privacy of their local inputs. It uses a linear algebra engine that leverages the quantum nature of light for information-theoretically secure multiparty inference using telecommunication components. This linear algebra engine can perform deep learning inference with rigorous upper bounds on the information leakage of both the deep learning model weights and the client's data, enabling double-blind operations. Applied to the MNIST classification task, it performs with classification accuracies exceeding 95% and a leakage of less than 0.1 bit per weight and data symbol. This leakage is an order of magnitude below the minimum bit precision for accurate deep learning using state-of-the-art quantization techniques. Our quantum-secure, multiparty computation lays the foundation for practical quantum-secure computation and unlocks secure cloud deep learning.