# Selection Policies

## SelectionPolicy

Bases: `ABC`
Base class for selection policies that choose the best combination.
Selection policies define the strategy for choosing the winning combination from a set of scored candidates. Different policies implement different trade-offs between competing objectives (e.g., weighted sum vs. Pareto).
This is a key component of the Generate-and-Test workflow where the SearchAlgorithm generates candidates, the ObjectiveSet scores them, and the SelectionPolicy picks the winner.
Examples:

    >>> # See WeightedSumPolicy and ParetoUtopiaPolicy for concrete examples
    >>> class SimpleMinPolicy(SelectionPolicy):
    ...     def select_best(self, evaluations_df: pd.DataFrame, objective_set: ObjectiveSet):
    ...         # Pick the row minimizing the first objective component
    ...         first_obj = list(objective_set.component_meta().keys())[0]
    ...         best_row = evaluations_df.loc[evaluations_df[first_obj].idxmin()]
    ...         return tuple(best_row['slices'])
Source code in energy_repset/selection_policies/policy.py
### `select_best` *(abstractmethod)*

    select_best(evaluations_df: DataFrame, objective_set: ObjectiveSet) -> tuple[Hashable, ...]
Select the best combination from scored candidates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `evaluations_df` | `DataFrame` | DataFrame where each row is a candidate combination, with columns `'slices'` (the combination tuple) and score columns for each objective component. | *required* |
| `objective_set` | `ObjectiveSet` | Provides metadata about score components (direction, weights, etc.) needed for selection logic. | *required* |
Returns:

| Type | Description |
|---|---|
| `tuple[Hashable, ...]` | Tuple of slice identifiers representing the winning combination. |
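To make the contract concrete, here is a toy `evaluations_df` in the shape described above and a minimal selection step. The column names and slice labels are illustrative, not part of the API.

```python
import pandas as pd

# Toy evaluations frame in the shape the contract assumes: one row per
# candidate, a 'slices' column holding the combination tuple, and one
# score column per objective component.
evaluations_df = pd.DataFrame({
    'slices': [('2010-01', '2010-07'), ('2010-02', '2010-08'), ('2010-03', '2010-09')],
    'wasserstein': [0.42, 0.17, 0.31],
    'correlation': [0.10, 0.25, 0.08],
})

# A select_best implementation ultimately reduces this frame to a single
# 'slices' tuple, e.g. the minimizer of one column:
best = tuple(evaluations_df.loc[evaluations_df['wasserstein'].idxmin(), 'slices'])
print(best)  # ('2010-02', '2010-08')
```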
Source code in energy_repset/selection_policies/policy.py
## PolicyOutcome *(dataclass)*
Source code in energy_repset/selection_policies/policy.py
## WeightedSumPolicy

Bases: `SelectionPolicy`
Selects the combination minimizing a weighted sum of objectives.
Combines multiple objectives into a single scalar score using weighted averaging. Objectives are oriented for minimization (max objectives are negated), optionally normalized, then combined using weights from the ObjectiveSet (which can be overridden).
This is the simplest multi-objective selection strategy and works well when the relative importance of the objectives is known in advance.
Examples:

    >>> from energy_repset import ObjectiveSet, ObjectiveSpec
    >>> from energy_repset.score_components import WassersteinFidelity, CorrelationFidelity

    >>> # Default: use weights from the ObjectiveSet
    >>> policy = WeightedSumPolicy()
    >>> objectives = ObjectiveSet([
    ...     ObjectiveSpec('wasserstein', WassersteinFidelity(), weight=1.0),
    ...     ObjectiveSpec('correlation', CorrelationFidelity(), weight=0.5)
    ... ])
    >>> # Final score = 1.0*wasserstein + 0.5*correlation

    >>> # Override weights in the policy
    >>> policy = WeightedSumPolicy(
    ...     overrides={'wasserstein': 2.0, 'correlation': 1.0}
    ... )
    >>> # Final score = 2.0*wasserstein + 1.0*correlation

    >>> # With normalization to make objectives comparable
    >>> policy = WeightedSumPolicy(
    ...     normalization='robust_minmax',  # Scale to [0, 1] using 5th-95th percentiles
    ...     tie_breakers=('wasserstein',),  # Break ties by wasserstein
    ...     tie_dirs=('min',)
    ... )
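The scoring rule described above (orient every objective for minimization, then combine with weights) can be sketched in plain pandas. This is an illustrative re-implementation under stated assumptions, not the library's actual code; the column names and direction labels are hypothetical.

```python
import pandas as pd

def weighted_sum_scores(df: pd.DataFrame,
                        directions: dict[str, str],
                        weights: dict[str, float]) -> pd.Series:
    """Sketch of weighted-sum scoring: negate 'max' objectives so that
    lower is always better, then sum the weighted columns."""
    total = pd.Series(0.0, index=df.index)
    for name, weight in weights.items():
        col = df[name].astype(float)
        if directions[name] == 'max':   # max objectives are negated
            col = -col
        total = total + weight * col
    return total

df = pd.DataFrame({
    'slices': [('a',), ('b',)],
    'wasserstein': [0.4, 0.2],   # minimized
    'correlation': [0.9, 0.3],   # maximized, so negated before weighting
})
scores = weighted_sum_scores(
    df,
    directions={'wasserstein': 'min', 'correlation': 'max'},
    weights={'wasserstein': 1.0, 'correlation': 0.5},
)
best = tuple(df.loc[scores.idxmin(), 'slices'])  # ('a',)
```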
Source code in energy_repset/selection_policies/weighted_sum.py
### `__init__`

    __init__(overrides: dict[str, float] | None = None, normalization: Normalization = 'none', tie_breakers: tuple[str, ...] = (), tie_dirs: tuple[ScoreComponentDirection, ...] = ()) -> None
Initialize weighted sum policy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `overrides` | `dict[str, float] \| None` | Optional dict mapping objective names to weights, overriding the weights from the ObjectiveSet. | `None` |
| `normalization` | `Normalization` | How to normalize objectives before weighting: `"none"` (no normalization), `"robust_minmax"` (scale to [0, 1] using 5th-95th percentiles), or `"zscore_iqr"` (z-score using median and IQR). | `'none'` |
| `tie_breakers` | `tuple[str, ...]` | Tuple of objective names to use for tie-breaking. | `()` |
| `tie_dirs` | `tuple[ScoreComponentDirection, ...]` | Corresponding directions (`"min"` or `"max"`) for the tie-breakers. | `()` |
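As a rough sketch of what the `"robust_minmax"` option describes, scaling by the 5th-95th percentile range might look like the following. The clipping to [0, 1] and the degenerate-column handling are assumptions on top of the description above, not confirmed library behavior.

```python
import numpy as np

def robust_minmax(values: np.ndarray) -> np.ndarray:
    """Scale to [0, 1] using the 5th and 95th percentiles instead of
    the raw min/max, so a single outlier cannot dominate the scale."""
    lo, hi = np.percentile(values, [5, 95])
    if hi == lo:  # degenerate column: all values (nearly) equal
        return np.zeros_like(values, dtype=float)
    return np.clip((values - lo) / (hi - lo), 0.0, 1.0)

x = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # 100.0 is an outlier
scaled = robust_minmax(x)
# The outlier is clipped to 1.0 rather than stretching the whole scale.
```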
Source code in energy_repset/selection_policies/weighted_sum.py
### `select_best`

    select_best(evaluations_df: DataFrame, objective_set: ObjectiveSet) -> tuple[Hashable, ...]
Select combination with minimum weighted sum score.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `evaluations_df` | `DataFrame` | DataFrame with a `'slices'` column and objective score columns. | *required* |
| `objective_set` | `ObjectiveSet` | Provides component metadata (direction, weights). | *required* |
Returns:

| Type | Description |
|---|---|
| `tuple[Hashable, ...]` | Tuple of slice identifiers with the lowest weighted sum score. |
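For intuition, here is one plausible reading of the tie-breaking step, not necessarily the library's exact logic: among candidates tied on the combined score, the named tie-breaker column decides. All names and values are illustrative.

```python
import pandas as pd

# Hypothetical scored frame where two candidates tie on the combined
# weighted-sum score; 'wasserstein' then acts as the tie-breaker,
# as in the tie_breakers example above.
df = pd.DataFrame({
    'slices': [('a',), ('b',), ('c',)],
    'score': [0.10, 0.10, 0.30],       # combined weighted-sum score
    'wasserstein': [0.5, 0.2, 0.1],
})

tied = df[df['score'] == df['score'].min()]        # rows tied on the main score
best_row = tied.loc[tied['wasserstein'].idxmin()]  # tie_dirs=('min',)
best = tuple(best_row['slices'])                   # ('b',)
```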
Source code in energy_repset/selection_policies/weighted_sum.py
## ParetoMaxMinStrategy

Bases: `ParetoUtopiaPolicy`
Source code in energy_repset/selection_policies/pareto.py
### `select_best`

    select_best(evaluations_df: DataFrame, objective_set: ObjectiveSet) -> tuple[Hashable, ...]
Select best solution using Pareto max-min approach.
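The docstring is terse, so here is one common max-min selection rule as a hedged sketch; whether `ParetoMaxMinStrategy` uses exactly this normalization and rule is an assumption.

```python
import numpy as np

def maxmin_select(scores: np.ndarray) -> int:
    """Max-min rule: each row is a candidate, each column an objective
    normalized so that 1 is best. Pick the candidate whose *worst*
    objective is the best, i.e. maximize the row-wise minimum."""
    worst_per_candidate = scores.min(axis=1)
    return int(worst_per_candidate.argmax())

scores = np.array([
    [0.9, 0.1],   # great on one objective, terrible on the other
    [0.6, 0.5],   # balanced: its worst objective is still decent
    [0.4, 0.7],
])
idx = maxmin_select(scores)  # 1: the balanced candidate wins
```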
Source code in energy_repset/selection_policies/pareto.py
## ParetoUtopiaPolicy

Bases: `SelectionPolicy`
Source code in energy_repset/selection_policies/pareto.py
### `select_best`

    select_best(evaluations_df: DataFrame, objective_set: ObjectiveSet) -> tuple[Hashable, ...]
Select best solution using Pareto utopia approach.
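As a hedged sketch of a utopia-point rule (the exact distance metric and any normalization used by `ParetoUtopiaPolicy` are assumptions): keep the non-dominated candidates, then choose the one closest to the component-wise best of every objective.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows (all objectives minimized)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse everywhere, strictly better somewhere
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                mask[i] = False
                break
    return mask

def utopia_select(points: np.ndarray) -> int:
    """Restrict to the Pareto front, then pick the front point closest
    (Euclidean distance) to the per-objective minima (the utopia point)."""
    front_idx = np.where(pareto_front(points))[0]
    utopia = points.min(axis=0)
    dists = np.linalg.norm(points[front_idx] - utopia, axis=1)
    return int(front_idx[dists.argmin()])

points = np.array([
    [0.1, 0.9],
    [0.5, 0.5],
    [0.9, 0.1],
    [0.8, 0.8],   # dominated by [0.5, 0.5], so excluded from the front
])
idx = utopia_select(points)  # 1: closest front point to utopia [0.1, 0.1]
```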
Source code in energy_repset/selection_policies/pareto.py
## ParetoOutcome *(dataclass)*

Bases: `PolicyOutcome`
Source code in energy_repset/selection_policies/pareto.py