ultraopt package

Subpackages

Submodules

ultraopt.constants module

class ultraopt.constants.Configs[source]

Bases: object

AutoAdjustFractionalBudget = True
FractionalBudget = True

ultraopt.result module

class ultraopt.result.Result(HB_iteration_data, HB_config)[source]

Bases: object

Object returned by the HB_master.run function

This class offers a simple API to access the information from a Hyperband run.

static from_dict(data, HB_config)[source]
get_all_runs(only_largest_budget=False)[source]

Returns all runs performed.

Parameters

only_largest_budget (boolean) – if True, only the largest budget for each configuration is returned. This makes sense if the runs are continued across budgets and the info field contains the information you care about. If False, all runs of a configuration are returned.
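The semantics of only_largest_budget can be sketched in plain Python. The Run tuple and all_runs helper below are hypothetical stand-ins for illustration, not ultraopt's actual implementation:

```python
from collections import namedtuple

# Hypothetical stand-in for ultraopt's run records, for illustration only.
Run = namedtuple("Run", ["config_id", "budget", "loss"])

def all_runs(runs, only_largest_budget=False):
    """Sketch of get_all_runs: optionally keep only each config's largest-budget run."""
    if not only_largest_budget:
        return list(runs)
    best = {}
    for r in runs:
        if r.config_id not in best or r.budget > best[r.config_id].budget:
            best[r.config_id] = r
    return list(best.values())

runs = [Run((0, 0, 0), 1, 0.9), Run((0, 0, 0), 9, 0.4), Run((0, 0, 1), 3, 0.7)]
print(len(all_runs(runs)))                            # 3
print(len(all_runs(runs, only_largest_budget=True)))  # 2
```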

get_fANOVA_data(config_space, budgets=None, loss_fn=<function Result.<lambda>>, failed_loss=None)[source]
get_id2config_mapping()[source]

Returns a dict where the keys are the config_ids and the values are the actual configurations.

get_incumbent_id()[source]

Find the config_id of the incumbent.

The incumbent here is the configuration with the smallest loss among all runs on the maximum budget. If no run finishes on the maximum budget, None is returned.
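This selection rule can be sketched as follows; the Run tuple and incumbent_id helper are hypothetical illustrations of the semantics, not ultraopt's source:

```python
from collections import namedtuple

# Hypothetical stand-in for a run record, for illustration only.
Run = namedtuple("Run", ["config_id", "budget", "loss"])

def incumbent_id(runs):
    """Sketch: the config with the smallest loss among runs on the maximum budget."""
    if not runs:
        return None
    max_budget = max(r.budget for r in runs)
    finished = [r for r in runs if r.budget == max_budget and r.loss is not None]
    if not finished:
        return None
    return min(finished, key=lambda r: r.loss).config_id

runs = [Run("a", 9, 0.5), Run("b", 9, 0.3), Run("c", 3, 0.1)]
# "c" has the smallest loss overall, but only "a" and "b" ran on the
# maximum budget (9), so the incumbent is "b".
print(incumbent_id(runs))  # b
```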

get_incumbent_trajectory(all_budgets=True, bigger_is_better=True, non_decreasing_budget=True)[source]

Returns the best configurations over time.

Parameters
  • all_budgets (bool) – If set to True, all runs (even those not on the largest budget) can be the incumbent. Otherwise, only full-budget runs are considered.

  • bigger_is_better (bool) – flag indicating whether an evaluation on a larger budget is always considered better. If True, the incumbent might increase for the first evaluations on a bigger budget.

  • non_decreasing_budget (bool) – flag indicating whether the budget of a new incumbent should be at least as big as the one for the current incumbent.

Returns

dictionary with all the config IDs, the times the runs finished, their respective budgets, and corresponding losses

Return type

dict
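The core of this method is a running-best scan over runs ordered by finish time. The sketch below illustrates that logic with the non_decreasing_budget flag; the Run tuple and incumbent_trajectory helper are hypothetical, simplified for illustration:

```python
from collections import namedtuple

# Hypothetical run record (config id, finish time, budget, loss), for illustration.
Run = namedtuple("Run", ["config_id", "t_finished", "budget", "loss"])

def incumbent_trajectory(runs, non_decreasing_budget=True):
    """Sketch of get_incumbent_trajectory: the best loss seen over time."""
    traj = {"config_ids": [], "times_finished": [], "budgets": [], "losses": []}
    best_loss, best_budget = float("inf"), -float("inf")
    for r in sorted(runs, key=lambda r: r.t_finished):
        if non_decreasing_budget and r.budget < best_budget:
            continue  # ignore would-be incumbents on smaller budgets
        if r.loss < best_loss:
            best_loss, best_budget = r.loss, r.budget
            traj["config_ids"].append(r.config_id)
            traj["times_finished"].append(r.t_finished)
            traj["budgets"].append(r.budget)
            traj["losses"].append(r.loss)
    return traj

runs = [Run("a", 1.0, 3, 0.8), Run("b", 2.0, 9, 0.5), Run("c", 3.0, 3, 0.2)]
# "c" improves the loss but on a smaller budget than incumbent "b",
# so with non_decreasing_budget=True it is skipped.
print(incumbent_trajectory(runs)["config_ids"])  # ['a', 'b']
```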

get_learning_curves(lc_extractor=<function extract_HBS_learning_curves>, config_ids=None)[source]

Extracts all learning curves from all run configurations.

Parameters
  • lc_extractor (callable) – a function that returns a list of learning curves. Defaults to ultraopt.result.extract_HBS_learning_curves.

  • config_ids (list of valid config ids) – if only a subset of the config ids is wanted.

Returns

a dictionary with the config_ids as keys and the learning curves as values

Return type

dict

get_pandas_dataframe(budgets=None, loss_fn=<function Result.<lambda>>)[source]
get_runs_by_id(config_id)[source]

Returns a list of runs for a given config id.

The runs are sorted by ascending budget, so index -1 will give the longest run for this config.
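For example, with a hypothetical Run tuple standing in for ultraopt's run records:

```python
from collections import namedtuple

# Hypothetical stand-in for a run record, for illustration only.
Run = namedtuple("Run", ["config_id", "budget", "loss"])

runs = [Run((0, 0, 1), 9, 0.4), Run((0, 0, 1), 1, 0.9), Run((0, 0, 1), 3, 0.6)]

# get_runs_by_id returns runs sorted by ascending budget,
# so index -1 is the run with the largest budget.
runs_sorted = sorted(runs, key=lambda r: r.budget)
print(runs_sorted[-1].budget)  # 9
```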

num_iterations()[source]
class ultraopt.result.Run(config_id, budget, loss, info, timestamps, error_logs)[source]

Bases: object

Not a proper class, more a ‘struct’ to bundle important information about a particular run.

ultraopt.result.extract_HBS_learning_curves(runs)[source]

Function to get the hyperband learning curves.

This is an example function showing the interface expected by the HB_result.get_learning_curves method.

Parameters

runs (list of HB_result.run objects) – the performed runs for an unspecified config

Returns

list of learning curves – An individual learning curve is a list of (t, x_t) tuples. This function must return a list of these. One could think of cases where multiple learning curves can be extracted from these runs, e.g. if each run is an independent training run of a neural network on the data.

Return type

list of lists of tuples
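A minimal extractor matching this interface might look like the sketch below, treating budget as t and loss as x_t. The Run tuple and extract_learning_curves helper are simplified illustrations, not ultraopt's source:

```python
from collections import namedtuple

# Hypothetical, reduced run record; ultraopt's Run also bundles
# config_id, info, timestamps, and error_logs.
Run = namedtuple("Run", ["budget", "loss"])

def extract_learning_curves(runs):
    """Sketch of the extractor interface: one curve of (budget, loss)
    points per config, skipping runs that produced no loss."""
    ordered = sorted(runs, key=lambda r: r.budget)
    curve = [(r.budget, r.loss) for r in ordered if r.loss is not None]
    return [curve]  # a list of learning curves (here always exactly one)

runs = [Run(9, 0.3), Run(1, 0.9), Run(3, None)]
print(extract_learning_curves(runs))  # [[(1, 0.9), (9, 0.3)]]
```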

ultraopt.result.logged_results_to_HBS_result(directory)[source]

Function to import logged ‘live results’ and return an HB_result object.

You can load live run results with this function, and the returned HB_result object gives you access to the results the same way a finished run would.

Parameters

directory (str) – the directory containing the results.json and config.json files

Returns

ultraopt.async_comm.result.Result – TODO

Return type

object
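The sketch below shows the general shape of reading such line-delimited live logs. It assumes, as in hpbandster (whose result API this module mirrors), that results.json holds one JSON record per line; the exact record layout in ultraopt, and the load_result_lines helper, are assumptions for illustration only:

```python
import json
import os
import tempfile

def load_result_lines(directory):
    """Hypothetical helper: parse one JSON record per line from results.json."""
    with open(os.path.join(directory, "results.json")) as f:
        return [json.loads(line) for line in f if line.strip()]

with tempfile.TemporaryDirectory() as d:
    # A made-up record: [config_id, budget, {result fields}].
    with open(os.path.join(d, "results.json"), "w") as f:
        f.write('[[0, 0, 0], 9, {"loss": 0.42}]\n')
    records = load_result_lines(d)
    print(records[0][1])  # 9
```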

ultraopt.structure module

class ultraopt.structure.Datum(config, config_info, results=None, timestamps=None, exceptions=None, status='QUEUED', budget=0)[source]

Bases: object

class ultraopt.structure.Job(id, **kwargs)[source]

Bases: object

time_it(which_time)[source]

ultraopt.viz module

ultraopt.viz.plot_convergence(x, y1, y2, xlabel='Number of iterations $n$', ylabel='$\\min f(x)$ after $n$ iterations', ax=None, name=None, alpha=0.2, yscale=None, color=None, true_minimum=None, **kwargs)[source]

Plot one or several convergence traces.

Parameters
  • args[i] (OptimizeResult, list of OptimizeResult, or tuple) –

    The result(s) for which to plot the convergence trace.

    • if OptimizeResult, then draw the corresponding single trace;

    • if list of OptimizeResult, then draw the corresponding convergence traces in transparency, along with the average convergence trace;

    • if tuple, then args[i][0] should be a string label and args[i][1] an OptimizeResult or a list of OptimizeResult.

  • ax (Axes, optional) – The matplotlib axes on which to draw the plot, or None to create a new one.

  • true_minimum (float, optional) – The true minimum value of the function, if known.

  • yscale (None or string, optional) – The scale for the y-axis.

Returns

ax – The matplotlib axes.

Return type

Axes
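The quantity a single convergence trace plots, min f(x) after n iterations (per the default ylabel), is the running minimum of the observed losses. A minimal sketch of that computation, independent of any plotting backend:

```python
from itertools import accumulate

# Observed losses, one per iteration (made-up values for illustration).
losses = [0.9, 0.7, 0.8, 0.4, 0.5]

# The convergence trace: the best (smallest) loss seen so far at each iteration.
trace = list(accumulate(losses, min))
print(trace)  # [0.9, 0.7, 0.7, 0.4, 0.4]
```

A trace computed this way is monotonically non-increasing, which is why convergence plots flatten out once no further improvement is found.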

Module contents