Evolutionary Algorithms for Multi-Objective Optimization¶
A beginner project by David Kwan, for lifelong learning. This amateur attempt may contain mistakes, but it represents my genuine effort to understand complex optimization concepts.
Disclaimer: This is entirely my own individual work, developed through personal research and supported by AI-driven prompt engineering, without cloning from other GitHub repositories.
Credit: I sincerely appreciate the `Optuna` and `OptunaHub` developers for making these powerful optimization frameworks available open-source. Special thanks to 夏目 大彰 / Hiroaki Natsume for his excellent introduction to MOEA/D for beginners on Medium, which greatly aided my understanding of the algorithm's implementation.
© david-kwan.com 2025
More detailed outputs are at the end of the code.
Project Overview: Evolutionary Optimization Framework¶
This project implements a comparative framework for evaluating and visualizing the performance of state-of-the-art multi-objective evolutionary algorithms (MOEAs) on benchmark problems. The system demonstrates the relative effectiveness of different optimization approaches in both bi-objective (ZDT1) and tri-objective (DTLZ1) problem spaces, providing insights into their strengths and weaknesses in terms of solution quality, diversity, and computational efficiency.
Core Technical Domains¶
- Evolutionary Computation: Implementation of advanced evolutionary algorithms for optimization.
- Multi-Objective Optimization: Discovery and comparison of Pareto-optimal solutions.
- Benchmark Problems: Testing on standard functions like `ZDT1` (2 objectives) and `DTLZ1` (3 objectives).
- Data Visualization: Interactive and static visualizations of Pareto fronts and solution distributions.
- Performance Analysis: Quantitative metrics such as hypervolume indicators and statistical analysis.
Technical Implementation & Architecture¶
Core Optimization Framework¶
- Implemented multiple evolutionary algorithms:
- NSGA-II (Non-dominated Sorting Genetic Algorithm II): A widely-used MOEA that employs non-dominated sorting and crowding distance to maintain diversity in solutions. It excels in bi-objective problems.
- NSGA-III (Reference-point based NSGA): An extension of NSGA-II designed for many-objective optimization, using reference points to ensure diversity across objectives. It is particularly effective for problems with 3 or more objectives.
- MOEA/D (Multi-objective Evolutionary Algorithm based on Decomposition): Decomposes a multi-objective problem into single-objective subproblems and optimizes them simultaneously. It often produces a high number of Pareto solutions with good distribution. In this implementation, the scalar aggregation function is set to `scalar_aggregation_func="tchebycheff"`, which uses the Tchebycheff (Chebyshev) method to convert the multi-objective problem into single-objective subproblems by minimizing the maximum weighted deviation from a reference point. This approach helps balance the trade-offs between objectives effectively; see the sketch after this list.
- Random Sampling (Baseline Comparison): A simple random search method used as a baseline to compare the effectiveness of the evolutionary algorithms.
- Integration with `Optuna` and `OptunaHub` for efficient optimization and sampler implementation.
- Comprehensive performance tracking with execution time and solution quality metrics.
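To make the decomposition idea concrete, here is a minimal sketch of the Tchebycheff scalarization. It is only illustrative: the function name, weight vector, and candidate values are hypothetical and are not taken from the OptunaHub sampler's internals.

```python
import numpy as np

def tchebycheff(objectives, weights, ideal_point):
    """Tchebycheff scalarization: the maximum weighted deviation of an
    objective vector from the ideal (reference) point. Smaller is better."""
    objectives = np.asarray(objectives, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ideal_point = np.asarray(ideal_point, dtype=float)
    return float(np.max(weights * np.abs(objectives - ideal_point)))

# One weight vector defines one single-objective subproblem.
ideal = np.array([0.0, 0.0])        # best value observed so far per objective
w = np.array([0.5, 0.5])
candidate_a = np.array([0.2, 0.9])
candidate_b = np.array([0.4, 0.5])

# MOEA/D keeps whichever candidate scores lower on this subproblem.
print(tchebycheff(candidate_a, w, ideal))  # 0.45
print(tchebycheff(candidate_b, w, ideal))  # 0.25 -> preferred here
```

Each weight vector corresponds to one subproblem, and neighbouring subproblems exchange solutions, which is one way to see why MOEA/D tends to spread solutions evenly along the front.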
Benchmark Problems & Test Functions¶
- `ZDT1`: A classic bi-objective benchmark with a convex Pareto front, testing convergence and diversity in 2D objective space.
- `DTLZ1`: A tri-objective benchmark with a planar Pareto-optimal surface, challenging algorithms to balance three objectives simultaneously (both functions are restated in plain NumPy in the sketch after this list).
- Utilized `Optuna`'s built-in crossover mechanisms (e.g., `BLXAlphaCrossover`) to explore genetic operators.
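For reference, a standalone NumPy restatement of the two benchmarks (the same formulas the Optuna objective functions in the code below implement); the evaluation points are chosen to lie on the known optimal fronts and are illustrative only.

```python
import numpy as np

def zdt1(x):
    """ZDT1 (2 objectives): convex Pareto front f2 = 1 - sqrt(f1) when g == 1."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

def dtlz1(x, n_objectives=3):
    """DTLZ1 (M objectives): linear Pareto-optimal surface with sum(f_i) = 0.5."""
    x = np.asarray(x, dtype=float)
    x_m = x[n_objectives - 1:]   # the last k variables drive the distance term g
    g = 100.0 * (len(x_m) + np.sum((x_m - 0.5) ** 2 - np.cos(20.0 * np.pi * (x_m - 0.5))))
    f = []
    for i in range(n_objectives):
        f_i = 0.5 * (1.0 + g) * np.prod(x[: n_objectives - 1 - i])
        if i > 0:
            f_i *= 1.0 - x[n_objectives - 1 - i]
        f.append(f_i)
    return tuple(f)

# Points on the known optimal fronts (g == 1 for ZDT1, g == 0 for DTLZ1):
print(zdt1(np.array([0.25] + [0.0] * 29)))      # (0.25, 0.5), since 1 - sqrt(0.25) = 0.5
print(dtlz1(np.array([0.5, 0.5] + [0.5] * 5)))  # three objectives summing to 0.5
```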
Performance Metrics & Analysis¶
- Hypervolume indicator calculation using `PyMOO` to quantify the quality and coverage of Pareto fronts (see the sketch after this list).
- Statistical analysis of the Pareto solution distribution across objectives (min, mean, median, max, std).
- Execution time performance benchmarking to assess computational efficiency.
- Solution diversity and convergence metrics through visualizations like parallel coordinates and box plots.
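As a minimal sketch of how the hypervolume metric is computed with `PyMOO` (assuming it is installed); the bi-objective front below is made up for illustration:

```python
import numpy as np
from pymoo.indicators.hv import HV

# Toy Pareto front for a minimization problem; a point contributes volume only
# if it is better than the reference point in every objective.
front = np.array([
    [0.1, 0.9],
    [0.4, 0.5],
    [0.8, 0.2],
])
reference_point = np.array([1.1, 1.1])  # deliberately worse than every front point

hv = HV(ref_point=reference_point)
print(hv.do(front))  # dominated area (2D) / volume (3D+); higher is better
```

Note that this project derives the reference point dynamically from the worst observed objective values plus a 10% margin (see `run_experiment` below), so hypervolume values are comparable between samplers within one experiment but not across ZDT1 and DTLZ1.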
Experimental Results & Findings¶
ZDT1 (2 Objectives): Bi-Objective Optimization¶
- MOEA/D: Achieved the highest number of Pareto-optimal solutions (135), demonstrating superior exploration and diversity.
- NSGA-II: Found (77) Pareto-optimal solutions, showing strong performance with good balance between convergence and diversity.
- NSGA-III: Identified (60) Pareto-optimal solutions, slightly underperforming compared to NSGA-II in this bi-objective scenario.
- RandomSampler: Discovered only (25) Pareto-optimal solutions, serving as a baseline with limited effectiveness.
Performance Metrics for ZDT1¶
- Hypervolume Comparison (higher is better, indicating better coverage of the objective space):
- MOEA/D: (5.8627) (best performance, reflecting excellent solution spread and convergence).
- NSGA-II: (5.6489) (close to MOEA/D, indicating strong performance for bi-objective problems).
- NSGA-III: (5.5868) (slightly lower, as it is optimized for many-objective problems).
- RandomSampler: (3.4966) (significantly lower, highlighting poor solution quality and coverage).
- Execution Time (in seconds, reflecting computational cost):
- MOEA/D: (24.9855) (highest due to complex decomposition and neighbor-based updates).
- NSGA-III: (24.7964) (comparable to MOEA/D, reflecting similar population dynamics).
- NSGA-II: (24.5672) (slightly faster, benefiting from simpler diversity mechanisms in 2D space).
- RandomSampler: (9.4826) (fastest due to lack of evolutionary operations, but at the cost of solution quality).
DTLZ1 (3 Objectives): Tri-Objective Optimization¶
- MOEA/D: Dominated with (691) Pareto-optimal solutions, showcasing exceptional ability to handle tri-objective problems.
- NSGA-II: Found (119) Pareto-optimal solutions, performing well but less effectively than MOEA/D.
- NSGA-III: Identified (79) Pareto-optimal solutions, underperforming in solution count compared to NSGA-II despite being designed for many-objective optimization.
- RandomSampler: Discovered only (55) Pareto-optimal solutions, again showing limited capability.
Performance Metrics for DTLZ1¶
- Hypervolume Comparison (higher is better, reflecting the volume dominated by the Pareto front):
- MOEA/D: (4.5644e+07) (best, tied with NSGA-II, indicating excellent coverage in 3D space).
- NSGA-II: (4.5644e+07) (equal to MOEA/D, showing robust performance despite fewer solutions).
- NSGA-III: (4.5643e+07) (very close to others, suggesting good quality despite fewer solutions).
- RandomSampler: (4.5542e+07) (significantly lower, reflecting poor solution distribution in tri-objective space).
- Execution Time (in seconds, indicating computational overhead):
- MOEA/D: (19.2840) (highest due to decomposition and large solution set management).
- NSGA-III: (18.2472) (slightly faster, benefiting from reference-point mechanisms).
- NSGA-II: (18.2300) (comparable to NSGA-III, showing efficiency in 3D space).
- RandomSampler: (3.9060) (fastest, lacking complex evolutionary computations).
Key Performance Insights¶
- MOEA/D's Superiority: Consistently outperforms the other algorithms in Pareto solution discovery (135 for `ZDT1`, 691 for `DTLZ1`), likely due to its decomposition approach, which explores the objective space by solving subproblems along uniformly spread weight vectors (a sketch of such a weight grid follows this list).
- NSGA-II's Robustness: Shows strong performance in both bi- and tri-objective problems (77 for `ZDT1`, 119 for `DTLZ1`), balancing convergence and diversity well, especially in `ZDT1`.
- NSGA-III's Specialization: Is designed for tri- and many-objective scenarios, yet surprisingly finds fewer solutions than NSGA-II in `DTLZ1` (79 vs. 119), possibly because it prioritizes reference-point diversity over solution count.
- Random Sampling's Limitations: Significantly underperforms on both problems, with low solution counts (25 for `ZDT1`, 55 for `DTLZ1`) and poor hypervolume scores, highlighting the necessity of evolutionary mechanisms for effective multi-objective optimization.
- Trade-off in Execution Time: The evolutionary algorithms (MOEA/D, NSGA-II, NSGA-III) require significantly more computational time than RandomSampler, reflecting the cost of achieving higher-quality solutions through genetic operations and diversity maintenance.
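The decomposition intuition can be illustrated with a small, hypothetical weight-vector generator (a Das-Dennis-style simplex lattice). MOEA/D assigns one such weight vector to each subproblem; the OptunaHub sampler handles this internally, so the helper below is only a sketch of the idea, not its implementation.

```python
import numpy as np
from itertools import combinations

def simplex_lattice_weights(n_objectives, divisions):
    """Uniformly spread weight vectors on the unit simplex: non-negative
    components, each a multiple of 1/divisions, summing to 1."""
    weights = []
    for bars in combinations(range(divisions + n_objectives - 1), n_objectives - 1):
        cuts = [-1] + list(bars) + [divisions + n_objectives - 1]
        weights.append([(cuts[i + 1] - cuts[i] - 1) / divisions
                        for i in range(n_objectives)])
    return np.array(weights)

w = simplex_lattice_weights(n_objectives=3, divisions=4)
print(len(w))         # C(4 + 3 - 1, 3 - 1) = 15 weight vectors, i.e. 15 subproblems
print(w.sum(axis=1))  # every row sums to 1.0
```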
Key Learnings as a Beginner¶
As a beginner in evolutionary optimization, this project has provided me with foundational insights into multi-objective optimization concepts and practical implementation. Below are some key takeaways:
- Understanding Pareto Optimality: Learned the concept of Pareto fronts, where solutions represent trade-offs between conflicting objectives and no single solution is universally optimal (a dominance-check sketch follows this list).
- Population Size Setting: Set population size to (100), balancing exploration (diversity of solutions) and computational cost. A larger population could improve diversity but increase runtime.
- Crossover and Mutation: Utilized `Optuna`'s `BLXAlphaCrossover` for genetic operations, understanding that crossover blends parent solutions to create offspring, while mutation introduces random variations to prevent stagnation. These operators drive the evolutionary process (a BLX-alpha sketch follows this list).
- Number of Trials: Configured (10,000) trials per algorithm to ensure sufficient exploration of the solution space, learning that more trials can lead to better convergence but require more time.
- Algorithm Differences: Gained insight into how NSGA-II focuses on non-dominated sorting, NSGA-III uses reference points for many-objective problems, and MOEA/D decomposes problems into manageable subproblems, each with unique strengths.
- Hypervolume Importance: Understood hypervolume as a key metric for evaluating Pareto front quality, measuring the dominated space in the objective domain, with higher values indicating better solution sets.
- Visualization Value: Learned to interpret Pareto front plots, parallel coordinates, and box plots, which visually reveal solution distribution, trade-offs, and algorithm performance differences.
- Practical Challenges: Faced challenges like computational cost (e.g., MOEA/D's longer runtime) and the need for proper reference points in hypervolume calculation, teaching me the importance of balancing efficiency and accuracy.
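A minimal dominance check with made-up objective vectors, to illustrate what Pareto optimality means in practice (Optuna's `study.best_trials` performs this non-dominated filtering for us):

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Brute-force O(n^2) filter keeping only the non-dominated points."""
    points = np.asarray(points, dtype=float)
    keep = [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return points[keep]

objectives = np.array([[0.2, 0.8], [0.5, 0.5], [0.6, 0.6], [0.9, 0.1]])
print(pareto_front(objectives))  # [0.6, 0.6] is dominated by [0.5, 0.5] and dropped
```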
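And a toy version of the BLX-alpha blend, to show what "blending parent solutions" means. This is only the core idea: Optuna's actual `BLXAlphaCrossover` also respects search-space bounds and is wired into the sampler's selection machinery.

```python
import numpy as np

rng = np.random.default_rng(777)

def blx_alpha(parent1, parent2, alpha=0.5):
    """BLX-alpha crossover: each child gene is drawn uniformly from an interval
    that extends alpha times the parents' gap beyond both parent values."""
    p1, p2 = np.asarray(parent1, dtype=float), np.asarray(parent2, dtype=float)
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

child = blx_alpha([0.2, 0.7, 0.4], [0.6, 0.1, 0.5])
print(child)  # genes blended from (and slightly beyond) the parent ranges
```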
Technical Requirements & Dependencies¶
- Python 3.9+
- `Optuna` 3.2.0+
- `OptunaHub`
- `NumPy`, `Pandas`
- `Matplotlib`, `Plotly`
- `PyMOO` (for hypervolume calculation)
- `tqdm` (for progress tracking)
- `Kaleido` (for static image generation of plots)
References¶
- Deb, K., Thiele, L., Laumanns, M., & Zitzler, E. (2001). Scalable test problems for evolutionary multi-objective optimization. In Evolutionary Multiobjective Optimization (pp. 105-145). Springer, London.
- Liu, F., Lin, X., Wang, Z., Yao, S., Tong, X., Yuan, M., & Zhang, Q. (2023). Large Language Model for Multi-objective Evolutionary Optimization. arXiv preprint arXiv:2310.12541.
- Natsume, H. (2023). An introduction to MOEA/D and examples of multi-objective optimization comparisons. Optuna Medium Blog. https://medium.com/optuna/an-introduction-to-moea-d-and-examples-of-multi-objective-optimization-comparisons-8630565a4e89
- OptunaHub. (2024). MOEA/D Sampler. https://hub.optuna.org/samplers/MOEA/D/
- PyMOO. (2024). DTLZ Problem Suite. https://pymoo.org/problems/many/dtlz.html
- Zhang, Q., & Li, H. (2007). MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6), 712-731.
- Zitzler, E., Deb, K., & Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2), 173-195.
# -*- coding: utf-8 -*-
import numpy as np
import optuna
import optunahub
import warnings
import time
import os
import pandas as pd
from tqdm import tqdm
from urllib3.exceptions import NotOpenSSLWarning
from optuna.samplers import NSGAIIISampler
from datetime import datetime
import random
# Suppress urllib3 warnings
warnings.filterwarnings("ignore", category=NotOpenSSLWarning)
# Suppress Optuna's trial progress logs
optuna.logging.set_verbosity(optuna.logging.WARNING)
# --- Try importing IPython ---
try:
from IPython import get_ipython
from IPython.display import display, Image
_IPYTHON_AVAILABLE = True
except ImportError:
_IPYTHON_AVAILABLE = False
# Define a fallback display function if not in Jupyter
def display(df):
print(df)
print("IPython not detected. Plots will be saved as HTML/PNG but not displayed inline.")
# --- Check if running in a Jupyter-like environment ---
def is_jupyter():
if not _IPYTHON_AVAILABLE:
return False
try:
shell = get_ipython().__class__.__name__
if shell == 'ZMQInteractiveShell': # Jupyter notebook or qtconsole
return True
elif shell == 'TerminalInteractiveShell': # IPython terminal
return False # Technically IPython, but not the visual frontend
else:
return False # Other type (?)
except NameError:
return False # Not in IPython
except AttributeError:
return False # get_ipython() returned None
_IS_JUPYTER = is_jupyter()
# --- Kaleido Check ---
_KALEIDO_INSTALLED = False
try:
import kaleido # Check if installed
_KALEIDO_INSTALLED = True
except ImportError:
print("Warning: Kaleido package not found. Static image generation will be skipped.")
print("Install with: pip install kaleido")
# --- Seaborn Check ---
_SEABORN_INSTALLED = False
try:
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
_SEABORN_INSTALLED = True
except ImportError:
print("Warning: Seaborn or matplotlib package not found. Falling back to Plotly box plots.")
# --- Global color definitions for algorithms ---
ALGORITHM_COLORS = {
"RandomSampler": "#1F77B4", # Default blue
"NSGAII": "#FF7F0E", # Default orange
"NSGAIII": "#2CA02C", # Default green
"MOEAD": "#FF5500" # Bright orange for MOEAD
}
# --- Objective Functions (Original) ---
def objective_zdt1(trial: optuna.Trial) -> tuple[float, float]:
# ZDT1 - 2 objectives
n_variables = 30
x = np.array([trial.suggest_float(f"x{i}", 0, 1) for i in range(n_variables)])
g = 1 + 9 * np.sum(x[1:]) / (n_variables - 1)
f1 = x[0]
f2 = g * (1 - (f1 / g) ** 0.5)
return f1, f2
def objective_dtlz1(trial: optuna.Trial) -> tuple[float, float, float]:
# DTLZ1 - 3 objectives
# For DTLZ1, we need at least M-1+k variables where M is the number of objectives
# and k is typically set to 5
n_objectives = 3
k = 5
n_variables = n_objectives - 1 + k
# Generate the decision variables
x = np.array([trial.suggest_float(f"x{i}", 0, 1) for i in range(n_variables)])
# Calculate g - for the last k variables
g = 100 * (k + np.sum((x[n_objectives-1:] - 0.5)**2 - np.cos(20 * np.pi * (x[n_objectives-1:] - 0.5))))
# Calculate objectives
f = []
for i in range(n_objectives):
f_i = 0.5 * (1 + g)
for j in range(n_objectives - 1 - i):
f_i *= x[j]
if i > 0:
f_i *= (1 - x[n_objectives - 1 - i])
f.append(f_i)
return tuple(f)
# --- Hypervolume Calculation ---
def calculate_hypervolume(values, reference_point):
"""Calculate hypervolume indicator for a set of points."""
try:
from pymoo.indicators.hv import HV
# Suppress pymoo compilation warning
from pymoo.config import Config
Config.warnings['not_compiled'] = False
indicator = HV(ref_point=reference_point)
if len(values) == 0:
return 0
# Ensure values are numpy array
values_np = np.array(values)
# Check for NaNs or Infs which pymoo HV cannot handle
if np.any(np.isnan(values_np)) or np.any(np.isinf(values_np)):
print("Warning: NaN or Inf found in Pareto values, cannot calculate hypervolume accurately.")
return float('nan')
return indicator.do(values_np)
except ImportError:
print("Could not calculate hypervolume. Install pymoo package with: pip install pymoo")
return float('nan')
except Exception as e:
print(f"Error during hypervolume calculation: {e}")
return float('nan')
# --- Enhanced Visualization Functions ---
# Helper function to create colored algorithm names for titles
def get_colored_algorithm_name(algorithm_name):
"""Returns HTML formatted algorithm name with its distinctive color"""
color = ALGORITHM_COLORS.get(algorithm_name, "#000000") # Default to black if not found
return f'<span style="color:{color}; font-weight:bold">{algorithm_name}</span>'
# Helper function to save/display plots with enhancements
def save_and_display_plotly_fig(fig, html_path, img_path_base, is_3d=False):
"""Saves Plotly figure as HTML and optionally as static image(s) displayed inline."""
try:
# Always save HTML
fig.write_html(html_path)
print(f"Saved interactive plot: {html_path}")
if _IS_JUPYTER and _KALEIDO_INSTALLED:
if is_3d:
# Define different camera angles for 3D plots with descriptive names
camera_views = [
{"eye": dict(x=1.8, y=1.8, z=1.8), "name": "Default View"},
{"eye": dict(x=2.5, y=0.1, z=0.1), "name": "View along X-axis"},
{"eye": dict(x=0.1, y=2.5, z=0.1), "name": "View along Y-axis"},
{"eye": dict(x=0.1, y=0.1, z=2.5), "name": "View along Z-axis (top-down)"},
{"eye": dict(x=-1.5, y=-1.5, z=-1.5), "name": "View from back where axes meet"}
]
# Get the original title
original_title = fig.layout.title.text if fig.layout.title and fig.layout.title.text else "3D Plot"
# Check if the title already contains a view name, and remove it if so
if " - " in original_title:
original_title = original_title.split(" - ")[0]
print(f"Saving and displaying 3D plot from multiple views:")
for i, view_info in enumerate(camera_views):
img_path = f"{img_path_base}_view{i+1}.png"
try:
# Create colored and bold view name
view_name = view_info["name"]
colored_view_name = f'<span style="color:#FF5500; font-weight:bold"> - {view_name}</span>'
# Update both camera and title with colorized view information
view_title = f"{original_title}{colored_view_name}"
fig.update_layout(
scene_camera={"eye": view_info["eye"]},
title=view_title
)
fig.write_image(img_path, width=800, height=600)
print(f" Saved static view {i+1}: {img_path}")
display(Image(filename=img_path))
except Exception as e:
print(f" Error saving/displaying static view {i+1}: {e}")
else:
# Save 2D plot as single image
img_path = f"{img_path_base}.png"
try:
fig.write_image(img_path, width=800, height=600)
print(f"Saved static plot: {img_path}")
print("Displaying static plot:")
display(Image(filename=img_path))
except Exception as e:
print(f"Error saving/displaying static plot: {e}")
elif not _IS_JUPYTER:
print("Not in Jupyter environment, skipping inline image display.")
elif not _KALEIDO_INSTALLED:
print("Kaleido not installed, skipping static image generation.")
except Exception as e:
print(f"An error occurred during plot saving/display: {e}")
# Helper function to save/display matplotlib figures
def save_and_display_mpl_fig(fig, img_path_base):
"""Saves matplotlib figure and displays it inline if in Jupyter"""
# Save the figure
img_path = f"{img_path_base}.png"
try:
fig.savefig(img_path, dpi=300, bbox_inches='tight')
print(f"Saved static plot: {img_path}")
if _IS_JUPYTER:
print("Displaying static plot:")
display(Image(filename=img_path))
# Close the figure to free memory
plt.close(fig)
else:
print("Not in Jupyter environment, skipping inline image display.")
except Exception as e:
print(f"Error saving/displaying matplotlib figure: {e}")
plt.close(fig)
# Using original plot style for individual Pareto front plots
def plot_individual_pareto_fronts(studies, experiment_name, results_dir, n_objectives):
"""Plot individual Pareto fronts with original styling (no special treatment for MOEAD)"""
for sampler_name, study in studies:
if len(study.best_trials) > 0:
print(f"\n{'─'*70}")
print(f" VISUALIZATION: Plotting for {sampler_name}")
print(f"{'─'*70}")
try:
# Use Optuna's default visualization with no modifications
fig = optuna.visualization.plot_pareto_front(study)
# Create title with colored algorithm name
colored_name = get_colored_algorithm_name(sampler_name)
title = f"{experiment_name}: Pareto Front for {colored_name}"
fig.update_layout(title=title)
# Define paths
html_path = os.path.join(results_dir, f"{experiment_name}_{sampler_name}_pareto.html")
img_path_base = os.path.join(results_dir, f"{experiment_name}_{sampler_name}_pareto")
# Save HTML and save/display static image(s)
save_and_display_plotly_fig(fig, html_path, img_path_base, is_3d=(n_objectives == 3))
except Exception as e:
print(f"Error plotting Pareto front for {sampler_name}: {e}")
else:
print(f"\n--- Skipping plot for {sampler_name} (no solutions found) ---")
# Function to create enhanced parallel coordinates plot
def create_parallel_coords_plot(studies, experiment_name, results_dir):
"""Create parallel coordinates plot with distinctive MOEA/D representation"""
print(f"\n{'─'*70}")
print(f" VISUALIZATION: Creating Parallel Coordinates Plot")
print(f"{'─'*70}")
import plotly.express as px
all_data = []
for i, (sampler_name, study) in enumerate(studies):
for trial in study.best_trials:
# Check for NaN/Inf in trial values
if any(v is None or np.isnan(v) or np.isinf(v) for v in trial.values):
continue # Skip trials with invalid values
row = {"Sampler": sampler_name, "SamplerID": i}
for j, val in enumerate(trial.values):
row[f"Objective {j+1}"] = val
all_data.append(row)
if all_data:
df = pd.DataFrame(all_data)
# Create a numeric color index for each sampler
unique_samplers = sorted(df["Sampler"].unique())
sampler_map = {name: i for i, name in enumerate(unique_samplers)}
df["ColorIdx"] = df["Sampler"].map(sampler_map)
# Define custom colors to make MOEAD stand out
colors = []
for name in unique_samplers:
colors.append(ALGORITHM_COLORS.get(name, px.colors.qualitative.Bold[len(colors) % len(px.colors.qualitative.Bold)]))
# Create the parallel coordinates plot with numeric coloring
try:
# Create title with colored algorithm names
title_parts = []
for name in unique_samplers:
title_parts.append(get_colored_algorithm_name(name))
title = f"{experiment_name}: Parallel Coordinates of Pareto-optimal Solutions<br>({', '.join(title_parts)})"
fig_pc = px.parallel_coordinates(
df,
color="ColorIdx", # Use numeric index for coloring
labels={"ColorIdx": "Sampler"}, # Label for the color legend
dimensions=[col for col in df.columns if col.startswith("Objective")],
title=title,
color_continuous_scale=colors, # Use the defined qualitative colors
color_continuous_midpoint=None # Important for discrete coloring
)
# Update colorbar to show sampler names instead of numbers
fig_pc.update_layout(
coloraxis_colorbar=dict(
title="Sampler",
tickvals=list(sampler_map.values()),
ticktext=list(sampler_map.keys())
)
)
# Define paths
html_path_pc = os.path.join(results_dir, f"{experiment_name}_parallel_coords.html")
img_path_base_pc = os.path.join(results_dir, f"{experiment_name}_parallel_coords")
# Save HTML and save/display static image (2D plot)
save_and_display_plotly_fig(fig_pc, html_path_pc, img_path_base_pc, is_3d=False)
except Exception as e:
print(f"Error creating parallel coordinates plot: {e}")
else:
print("--- Skipping parallel coordinates plot (no valid data) ---")
# Fixed function to modify comparison plot
def create_comparison_plot(studies, experiment_name, n_objectives, results_dir):
"""Create and save a comparison plot of Pareto fronts with distinctive MOEA/D markers"""
print(f"\n{'─'*70}")
print(f" VISUALIZATION: Creating Comparison Plot")
print(f"{'─'*70}")
import plotly.graph_objects as go
fig_comp = go.Figure()
has_comparison_data = False
# Generate colored algorithm names for title
colored_names = []
for i, (sampler_name, study) in enumerate(studies):
if len(study.best_trials) > 0:
has_comparison_data = True
values = np.array([t.values for t in study.best_trials])
# Check for NaNs/Infs before plotting
if np.any(np.isnan(values)) or np.any(np.isinf(values)):
print(f"Warning: Skipping {sampler_name} in comparison plot due to NaN/Inf values.")
continue
# Get color for this algorithm
color = ALGORITHM_COLORS.get(sampler_name, None)
# Set marker properties based on sampler
marker_props = {
"size": 6 if sampler_name != "MOEAD" else 10, # Larger for MOEAD
"opacity": 0.7 if sampler_name != "MOEAD" else 0.9, # More opaque for MOEAD
"color": color # Use predefined color
}
# For 3D plots, use valid symbols (no 'star' in 3D)
if n_objectives == 3:
# Use diamond for MOEAD in 3D (as star is not supported)
marker_props["symbol"] = "circle" if sampler_name != "MOEAD" else "diamond"
fig_comp.add_trace(go.Scatter3d(
x=values[:, 0],
y=values[:, 1],
z=values[:, 2],
mode='markers',
name=sampler_name,
marker=marker_props
))
# For 2 objectives, use star for MOEAD
else:
if sampler_name == "MOEAD":
marker_props["symbol"] = "star"
fig_comp.add_trace(go.Scatter(
x=values[:, 0],
y=values[:, 1],
mode='markers',
name=sampler_name,
marker=marker_props
))
colored_names.append(get_colored_algorithm_name(sampler_name))
if has_comparison_data:
# Create title with colored algorithm names
title = f"{experiment_name}: Comparison of Pareto Fronts<br>({', '.join(colored_names)})"
layout_options = {
"title": title,
"legend_title": "Samplers"
}
if n_objectives == 3:
layout_options["scene"] = dict(
xaxis_title="Objective 1",
yaxis_title="Objective 2",
zaxis_title="Objective 3"
)
else:
layout_options["xaxis_title"] = "Objective 1"
layout_options["yaxis_title"] = "Objective 2"
fig_comp.update_layout(**layout_options)
# Define paths
html_path_comp = os.path.join(results_dir, f"{experiment_name}_comparison.html")
img_path_base_comp = os.path.join(results_dir, f"{experiment_name}_comparison")
# Save HTML and save/display static image(s)
save_and_display_plotly_fig(fig_comp, html_path_comp, img_path_base_comp, is_3d=(n_objectives == 3))
else:
print("--- Skipping comparison plot (no data or only data with NaN/Inf) ---")
# Enhanced box plots with seaborn style
def create_box_plots_seaborn(studies, experiment_name, n_objectives, results_dir):
"""Create box plots with seaborn for better aesthetics"""
print(f"\n{'─'*70}")
print(f" VISUALIZATION: Creating Objective Box Plots (Seaborn Style)")
print(f"{'─'*70}")
if _SEABORN_INSTALLED:
# Use seaborn for nicer aesthetics
# Generate colored algorithm names for title
colored_algo_names = []
# Prepare data for all objectives
all_obj_data = []
for obj_idx in range(n_objectives):
obj_data = []
for sampler_name, study in studies:
if sampler_name not in [name for name, _ in colored_algo_names]:
color = ALGORITHM_COLORS.get(sampler_name, "#000000")
colored_algo_names.append((sampler_name, color))
valid_values = [t.values[obj_idx] for t in study.best_trials if t.values is not None and len(t.values) > obj_idx
and t.values[obj_idx] is not None and not np.isnan(t.values[obj_idx]) and not np.isinf(t.values[obj_idx])]
# Create DataFrame with values for this sampler and objective
if valid_values:
df = pd.DataFrame({
'Algorithm': [sampler_name] * len(valid_values),
'Value': valid_values
})
obj_data.append(df)
if obj_data:
# Combine all samplers for this objective
obj_df = pd.concat(obj_data, ignore_index=True)
obj_df['Objective'] = f"Objective {obj_idx+1}"
all_obj_data.append(obj_df)
if all_obj_data:
# Combine all objectives
all_data = pd.concat(all_obj_data, ignore_index=True)
# Set up the plot with proper size
fig_height = 5
fig_width = 6 * n_objectives # Width based on number of objectives
fig, axes = plt.subplots(1, n_objectives, figsize=(fig_width, fig_height), sharey=False)
if n_objectives == 1:
axes = [axes] # Make axes iterable for single objective case
# Set seaborn style
sns.set_style('whitegrid')
# Custom palette using our algorithm colors
palette = {name: color for name, color in colored_algo_names}
# Plot each objective
for obj_idx in range(n_objectives):
obj_name = f"Objective {obj_idx+1}"
obj_data = all_data[all_data['Objective'] == obj_name]
# Enhanced box plot - Fixed to avoid FutureWarning
ax = sns.boxplot(
x='Algorithm',
y='Value',
hue='Algorithm', # Use hue instead of just palette
data=obj_data,
ax=axes[obj_idx],
palette=palette,
width=0.6, # Wider boxes
linewidth=1.5, # Thicker lines
fliersize=5, # Outlier marker size
boxprops={'alpha': 0.8}, # Box transparency
legend=False # No legend since we're using hue
)
# Add points on top of boxes for better distribution view - Fixed to avoid FutureWarning
sns.stripplot(
x='Algorithm',
y='Value',
hue='Algorithm', # Use hue instead of just palette
data=obj_data,
ax=axes[obj_idx],
alpha=0.4,
size=4,
palette=palette,
jitter=True, # Add jitter for better visibility
edgecolor='auto', # Use 'auto' instead of 'gray'
linewidth=0.5,
legend=False # No legend since we're using hue
)
# Emphasize MOEAD boxes if present
if 'MOEAD' in obj_data['Algorithm'].values:
# Find MOEAD boxes and make them stand out
for i, artist in enumerate(ax.artists):
if ax.get_xticklabels()[i].get_text() == 'MOEAD':
artist.set_edgecolor(ALGORITHM_COLORS['MOEAD'])
artist.set_linewidth(2.5)
# Make all other elements of this box more prominent
for j in range(i * 6, (i + 1) * 6):
if j < len(ax.lines):
ax.lines[j].set_color(ALGORITHM_COLORS['MOEAD'])
ax.lines[j].set_linewidth(2.0 if j % 6 < 2 else 1.5)
# Style the subplot
axes[obj_idx].set_title(f"Objective {obj_idx+1}", fontsize=14, fontweight='bold')
axes[obj_idx].set_xlabel('')
if obj_idx == 0:
axes[obj_idx].set_ylabel('Value', fontsize=12)
else:
axes[obj_idx].set_ylabel('')
# Rotate x-axis labels for better fit
plt.setp(ax.get_xticklabels(), rotation=30, ha='right')
# Create title with colored algorithm names
colored_algorithm_names = [get_colored_algorithm_name(name) for name, _ in colored_algo_names]
title = f"{experiment_name}: Distribution of Objective Values"
# Add overall title
fig.suptitle(title, fontsize=16, fontweight='bold')
plt.tight_layout()
plt.subplots_adjust(top=0.9) # Make room for the title
# Define paths and save
img_path_base = os.path.join(results_dir, f"{experiment_name}_objective_boxplots_seaborn")
save_and_display_mpl_fig(fig, img_path_base)
# Also create an HTML version with colored title for consistency
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Create a plotly version for HTML output with colored title
fig_plotly = make_subplots(rows=1, cols=n_objectives,
subplot_titles=[f"Objective {i+1}" for i in range(n_objectives)])
for obj_idx in range(n_objectives):
obj_name = f"Objective {obj_idx+1}"
obj_data = all_data[all_data['Objective'] == obj_name]
for algorithm in obj_data['Algorithm'].unique():
algorithm_values = obj_data[obj_data['Algorithm'] == algorithm]['Value'].values
# Set color and boxpoints based on algorithm
color = palette.get(algorithm, "#000000")
box_width = 0.6 if algorithm != "MOEAD" else 0.7
line_width = 1.5 if algorithm != "MOEAD" else 2.5
fig_plotly.add_trace(
go.Box(
y=algorithm_values,
name=algorithm,
showlegend=(obj_idx == 0), # Show legend only on first subplot
marker_color=color,
line=dict(width=line_width, color=color),
boxmean=True, # Show mean
boxpoints='outliers', # Show outliers
jitter=0.3, # Add jitter
pointpos=0, # Center points
width=box_width, # Box width
opacity=0.8
),
row=1, col=obj_idx+1
)
# Add colored title
title_html = f"{experiment_name}: Distribution of Objective Values<br>({', '.join(colored_algorithm_names)})"
fig_plotly.update_layout(
title=title_html,
boxmode='group' # Group boxes by algorithm
)
# Define HTML path
html_path = os.path.join(results_dir, f"{experiment_name}_objective_boxplots.html")
fig_plotly.write_html(html_path)
print(f"Saved interactive plot: {html_path}")
else:
print("--- Skipping box plot (no valid data) ---")
else:
# Fallback to plotly box plots
import plotly.graph_objects as go
from plotly.subplots import make_subplots
print("Seaborn not installed. Using Plotly box plots instead.")
fig_box = make_subplots(rows=1, cols=n_objectives,
subplot_titles=[f"Objective {i+1}" for i in range(n_objectives)])
has_box_data = False
# Generate colored algorithm names for title
colored_names = []
for obj_idx in range(n_objectives):
for i, (sampler_name, study) in enumerate(studies):
if len(study.best_trials) > 0:
valid_values = [t.values[obj_idx] for t in study.best_trials if t.values is not None and len(t.values) > obj_idx
and t.values[obj_idx] is not None and not np.isnan(t.values[obj_idx]) and not np.isinf(t.values[obj_idx])]
if valid_values:
has_box_data = True
# Get color for this sampler
color = ALGORITHM_COLORS.get(sampler_name, None)
if obj_idx == 0 and sampler_name not in [name for name, _ in colored_names]:
colored_names.append((sampler_name, get_colored_algorithm_name(sampler_name)))
# Set box properties based on sampler
box_props = {}
if color:
box_props["marker_color"] = color
# Make MOEAD boxes stand out
if sampler_name == "MOEAD":
box_props["line"] = dict(width=2.5, color=color)
box_props["boxmean"] = True # Show mean line for MOEAD
box_props["width"] = 0.7 # Wider boxes for MOEAD
else:
box_props["line"] = dict(width=1.5)
box_props["width"] = 0.6 # Standard width
# Enhanced box plot settings
box_props["boxpoints"] = 'outliers' # Show outliers
box_props["jitter"] = 0.3 # Add jitter
box_props["pointpos"] = 0 # Center points
box_props["opacity"] = 0.8 # Slight transparency
fig_box.add_trace(
go.Box(
y=valid_values,
name=sampler_name,
showlegend=(obj_idx == 0), # Show legend only on first subplot
**box_props
),
row=1, col=obj_idx+1
)
if has_box_data:
# Create title with colored algorithm names
title = f"{experiment_name}: Distribution of Objective Values<br>({', '.join([name for _, name in colored_names])})"
fig_box.update_layout(
title=title,
boxmode='group' # Group boxes by sampler
)
# Define paths
html_path_box = os.path.join(results_dir, f"{experiment_name}_objective_boxplots.html")
img_path_base_box = os.path.join(results_dir, f"{experiment_name}_objective_boxplots")
# Save HTML and save/display static image (2D plot)
save_and_display_plotly_fig(fig_box, html_path_box, img_path_base_box, is_3d=False)
else:
print("--- Skipping box plot (no valid data) ---")
# --- Experiment Runner with Enhanced Organization ---
def run_experiment(objective_func, n_objectives, n_trials, samplers_config, experiment_name):
"""
Runs a multi-objective optimization experiment comparing different samplers.
"""
print(f"\n{'='*80}")
print(f" EXPERIMENT: Running {experiment_name} experiment with {n_objectives} objectives")
print(f"{'='*80}")
directions = ["minimize"] * n_objectives
studies = []
execution_times = {}
# Create results directory
results_dir = f"results_{experiment_name.replace(' ', '_').replace('(', '').replace(')', '')}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
os.makedirs(results_dir, exist_ok=True)
print(f"Results will be saved in: {results_dir}")
# Run optimization for each sampler with a progress bar
print(f"\n{'─'*70}")
print(f" OPTIMIZATION: Running Optimization for Each Sampler")
print(f"{'─'*70}")
for sampler_name, sampler in samplers_config:
print(f"\n{'·'*50}")
print(f" Optimizing with {sampler_name}")
print(f"{'·'*50}")
# Create study
study = optuna.create_study(
sampler=sampler,
study_name=f"{sampler_name}_{experiment_name}",
directions=directions,
)
# Set up progress bar
progress_bar = tqdm(total=n_trials, desc=f"{sampler_name} Progress")
# Callback to update the progress bar
def update_progress_bar(study, trial):
progress_bar.update(1)
# Run optimization with progress bar and track time
start_time = time.time()
study.optimize(objective_func, n_trials=n_trials, callbacks=[update_progress_bar])
end_time = time.time()
execution_times[sampler_name] = end_time - start_time
progress_bar.close()
# Print some statistics
print(f"Number of Pareto-optimal solutions for {sampler_name}: {len(study.best_trials)}")
# Store study for comparison
studies.append((sampler_name, study))
# Create Pareto solution stats DataFrame
print(f"\n{'─'*70}")
print(f" ANALYSIS: Pareto Solution Statistics")
print(f"{'─'*70}")
pareto_data = [(sampler_name, len(study.best_trials)) for sampler_name, study in studies]
pareto_stats_df = pd.DataFrame(pareto_data, columns=["Sampler", "Pareto Solutions"])
display(pareto_stats_df)
# Visualization
try:
print(f"\n{'─'*70}")
print(f" VISUALIZATION: Generating Visualizations")
print(f"{'─'*70}")
# Plot individual Pareto fronts
plot_individual_pareto_fronts(studies, experiment_name, results_dir, n_objectives)
# Create comparison plot
create_comparison_plot(studies, experiment_name, n_objectives, results_dir)
# Create parallel coordinates plot
create_parallel_coords_plot(studies, experiment_name, results_dir)
# Create seaborn-style box plots
create_box_plots_seaborn(studies, experiment_name, n_objectives, results_dir)
print(f"\n{'─'*70}")
print(f" VISUALIZATION: All Visualizations Completed")
print(f"{'─'*70}")
if _IS_JUPYTER and _KALEIDO_INSTALLED:
print("Static images should be displayed above (if generated).")
print(f"Files saved to '{results_dir}' directory.")
except ImportError as e:
print(f"Visualization error: Required library not found: {e}")
print("Install matplotlib, plotly, pandas, and kaleido to generate visualizations.")
except Exception as e:
print(f"An unexpected error occurred during visualization: {e}")
# Calculate performance metrics
print(f"\n{'─'*70}")
print(f" ANALYSIS: Calculating Performance Metrics")
print(f"{'─'*70}")
metrics = []
summary_data = []
# Define reference point (worst value for each objective + some margin)
ref_point_defined = False
max_values = [1.1] * n_objectives # Default fallback
# Calculate dynamic reference point
all_valid_best_trials = [
trial for _, study in studies
for trial in study.best_trials
if trial.values is not None and not any(v is None or np.isnan(v) or np.isinf(v) for v in trial.values)
]
if all_valid_best_trials:
try:
# Calculate max for each objective based on valid trials
max_values_calc = []
for obj_idx in range(n_objectives):
obj_values = [t.values[obj_idx] for t in all_valid_best_trials]
if not obj_values:
raise ValueError(f"No valid values found for objective {obj_idx+1}")
max_val = max(obj_values)
# Add a relative margin (10%) and a small absolute margin (1e-6)
# to handle zero or very small max values robustly.
margin = abs(max_val * 0.1) + 1e-6
max_values_calc.append(max_val + margin)
max_values = max_values_calc
ref_point_defined = True
print(f"\nCalculated Reference Point for Hypervolume: {max_values}")
except Exception as e:
print(f"\nWarning: Could not calculate dynamic reference point: {e}. Using default: {max_values}")
else:
print(f"\nWarning: No valid Pareto solutions found across studies. Using default reference point: {max_values}")
# Prepare for experiment summary DataFrame
for sampler_name, study in studies:
valid_trials = [
t for t in study.best_trials
if t.values is not None and not any(v is None or np.isnan(v) or np.isinf(v) for v in t.values)
]
n_pareto_valid = len(valid_trials)
summary_row = {
"Sampler": sampler_name,
"Pareto Solutions": n_pareto_valid # Report count of *valid* pareto solutions
}
metric = {
"Sampler": sampler_name,
"Total Pareto Trials Found": len(study.best_trials), # Total found by Optuna
"Valid Pareto Solutions": n_pareto_valid, # Those usable for metrics
"Execution Time (s)": execution_times.get(sampler_name, float('nan'))
}
if n_pareto_valid > 0:
# Calculate statistics for each objective using valid trials
for i in range(n_objectives):
values = [t.values[i] for t in valid_trials]
metric[f"Min f{i+1}"] = min(values)
metric[f"Mean f{i+1}"] = np.mean(values)
metric[f"Median f{i+1}"] = np.median(values)
metric[f"Max f{i+1}"] = max(values)
metric[f"Std f{i+1}"] = np.std(values)
# Add min values to summary
summary_row[f"min f{i+1}"] = min(values)
# Calculate hypervolume using valid trials
pareto_values = [list(t.values) for t in valid_trials]
hypervolume = calculate_hypervolume(pareto_values, max_values)
metric["Hypervolume"] = hypervolume
summary_row["Hypervolume"] = hypervolume
else:
# Add NaN placeholders if no valid solutions
for i in range(n_objectives):
metric[f"Min f{i+1}"] = np.nan
metric[f"Mean f{i+1}"] = np.nan
metric[f"Median f{i+1}"] = np.nan
metric[f"Max f{i+1}"] = np.nan
metric[f"Std f{i+1}"] = np.nan
summary_row[f"min f{i+1}"] = np.nan
# Hypervolume is 0 if no valid points dominate the reference point,
# or NaN if ref point couldn't be defined properly.
metric["Hypervolume"] = 0.0 if ref_point_defined else np.nan
summary_row["Hypervolume"] = 0.0 if ref_point_defined else np.nan
summary_data.append(summary_row)
metrics.append(metric)
# Create experiment summary DataFrame
print(f"\n{'─'*70}")
print(f" ANALYSIS: {experiment_name} Summary")
print(f"{'─'*70}")
summary_df = pd.DataFrame(summary_data)
display(summary_df)
# Create detailed metrics DataFrame
print(f"\n{'─'*70}")
print(f" ANALYSIS: Detailed Performance Metrics")
print(f"{'─'*70}")
metrics_df = pd.DataFrame(metrics)
pd.set_option('display.max_columns', None) # Show all columns
pd.set_option('display.width', 1000) # Wider display
pd.set_option('display.precision', 4) # Show 4 decimal places
display(metrics_df)
# Save metrics to CSV
metrics_csv_path = os.path.join(results_dir, f"{experiment_name}_metrics.csv")
try:
metrics_df.to_csv(metrics_csv_path, index=False)
print(f"Performance metrics saved to {metrics_csv_path}")
except Exception as e:
print(f"Error saving metrics CSV: {e}")
print(f"\n{'='*80}")
print(f" EXPERIMENT: {experiment_name} Completed")
print(f"{'='*80}")
return studies, results_dir # Return studies and the results directory path
# --- Overall Experiment Summary Function with Enhanced Formatting ---
def generate_overall_summary(experiment_results, output_filename=None):
"""Generate an overall summary of all experiments with clear section formatting"""
print(f"\n{'='*80}")
print(f" SUMMARY: Generating Overall Experiment Summary")
print(f"{'='*80}")
overall_data = []
for exp_name, studies_list, results_dir in experiment_results:
metrics_path = os.path.join(results_dir, f"{exp_name}_metrics.csv")
try:
exp_metrics_df = pd.read_csv(metrics_path)
for _, row in exp_metrics_df.iterrows():
n_obj = 2 if exp_name.startswith("ZDT1") else 3
summary_row = {
"Experiment": exp_name,
"Sampler/Config": row["Sampler"],
"Valid Pareto Solutions": row["Valid Pareto Solutions"],
"Hypervolume": row.get("Hypervolume", np.nan)
}
# Add min objective values
for i in range(n_obj):
summary_row[f"Min f{i+1}"] = row.get(f"Min f{i+1}", np.nan)
overall_data.append(summary_row)
except FileNotFoundError:
print(f"Warning: Metrics file not found for {exp_name} at {metrics_path}. Skipping this experiment in summary.")
except Exception as e:
print(f"Warning: Error reading metrics file for {exp_name}: {e}. Skipping this experiment in summary.")
if overall_data:
overall_df = pd.DataFrame(overall_data)
# Display the overall summary
print(f"\n{'─'*70}")
print(f" SUMMARY: Overall Experiment Results")
print(f"{'─'*70}")
pd.set_option('display.max_columns', None)
pd.set_option('display.width', 200)
# Reorder columns for clarity
cols_order = ["Experiment", "Sampler/Config", "Valid Pareto Solutions", "Hypervolume"]
obj_cols = sorted([col for col in overall_df.columns if col.startswith("Min f")])
final_cols = cols_order + obj_cols
display(overall_df[final_cols].round(4)) # Round for display
# Save overall summary
if output_filename is None:
output_filename = f"results_OVERALL_SUMMARY_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
try:
overall_df[final_cols].to_csv(output_filename, index=False)
print(f"\nOverall summary saved to: {output_filename}")
except Exception as e:
print(f"Error saving overall summary CSV: {e}")
else:
print("\nNo data available to generate overall summary.")
return overall_df if overall_data else None
# --- Main Execution Block ---
if __name__ == "__main__":
print(f"\n{'='*80}")
print(f" INITIALIZATION: Multi-Objective Optimization Comparison")
print(f"{'='*80}")
# Load required modules
try:
moead_mod = optunahub.load_module("samplers/moead")
_MOEAD_LOADED = True
print("Successfully loaded MOEAD sampler from OptunaHub.")
except Exception as e:
print(f"Warning: Failed to load MOEAD sampler from OptunaHub: {e}")
print("MOEAD sampler will be skipped.")
_MOEAD_LOADED = False
# Configuration
print(f"\n{'─'*70}")
print(f" CONFIGURATION: Setting Up Experiment Parameters")
print(f"{'─'*70}")
seed = 777
population_size = 100
n_trials = 10000 # Doubled from 5000 to 10000
# Define crossover operator (used by NSGA-II, NSGA-III, MOEAD)
try:
crossover = optuna.samplers.nsgaii.BLXAlphaCrossover()
print("Using experimental BLXAlphaCrossover.")
except AttributeError:
print("Warning: BLXAlphaCrossover not found (likely Optuna version < 3.0). Falling back to default crossover for samplers.")
crossover = None # Samplers will use their defaults
print(f"\nRunning with n_trials = {n_trials}, population_size = {population_size}")
if _IS_JUPYTER:
print("Detected Jupyter environment. Will display static plots inline if Kaleido is installed.")
if not _KALEIDO_INSTALLED:
print("Install Kaleido (`pip install kaleido`) for static image output.")
else:
print("Not in Jupyter environment. Plots will be saved to files.")
# Define samplers for 2-objective problem (ZDT1)
samplers_2obj_config = [
("RandomSampler", optuna.samplers.RandomSampler(seed=seed)),
("NSGAII", optuna.samplers.NSGAIISampler(
seed=seed,
population_size=population_size,
crossover=crossover,
)),
("NSGAIII", NSGAIIISampler(
seed=seed,
population_size=population_size,
crossover=crossover,
)),
]
if _MOEAD_LOADED:
n_neighbors = max(2, population_size // 5) # Ensure n_neighbors >= 2
print(f"Configuring MOEAD with n_neighbors = {n_neighbors}")
samplers_2obj_config.append(
("MOEAD", moead_mod.MOEADSampler(
seed=seed,
population_size=population_size,
n_neighbors=n_neighbors,
scalar_aggregation_func="tchebycheff",
crossover=crossover,
))
)
# Run bi-objective experiment (ZDT1)
zdt1_studies, zdt1_results_dir = run_experiment(
objective_func=objective_zdt1,
n_objectives=2,
n_trials=n_trials,
samplers_config=samplers_2obj_config,
experiment_name="ZDT1 (2 Objectives)"
)
# Define samplers for 3-objective problem (DTLZ1)
samplers_3obj_config = [
("RandomSampler", optuna.samplers.RandomSampler(seed=seed)),
("NSGAII", optuna.samplers.NSGAIISampler(
seed=seed,
population_size=population_size,
crossover=crossover,
)),
("NSGAIII", NSGAIIISampler(
seed=seed,
population_size=population_size,
crossover=crossover,
)),
]
if _MOEAD_LOADED:
n_neighbors = max(2, population_size // 5) # Ensure n_neighbors >= 2
print(f"Configuring MOEAD with n_neighbors = {n_neighbors}")
samplers_3obj_config.append(
("MOEAD", moead_mod.MOEADSampler(
seed=seed,
population_size=population_size,
n_neighbors=n_neighbors,
scalar_aggregation_func="tchebycheff",
crossover=crossover,
))
)
# Run tri-objective experiment (DTLZ1)
dtlz1_studies, dtlz1_results_dir = run_experiment(
objective_func=objective_dtlz1,
n_objectives=3,
n_trials=n_trials,
samplers_config=samplers_3obj_config,
experiment_name="DTLZ1 (3 Objectives)"
)
# Generate overall experiment summary
experiment_results = [
("ZDT1 (2 Objectives)", zdt1_studies, zdt1_results_dir),
("DTLZ1 (3 Objectives)", dtlz1_studies, dtlz1_results_dir)
]
overall_summary = generate_overall_summary(experiment_results)
print(f"\n{'='*80}")
print(f" COMPLETION: All Experiments Completed Successfully")
print(f"{'='*80}")
print("All visualizations and metrics have been saved to their respective results directories:")
for exp, path in {"ZDT1 (2 Objectives)": zdt1_results_dir, "DTLZ1 (3 Objectives)": dtlz1_results_dir}.items():
print(f"- {exp}: {path}")
/Users/777david/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
/var/folders/3m/_x0lkjz93nqd_y3r6sw126dr0000gn/T/ipykernel_1463/3350209180.py:999: ExperimentalWarning: BLXAlphaCrossover is experimental (supported from v3.0.0). The interface can change in the future.
crossover = optuna.samplers.nsgaii.BLXAlphaCrossover()
/var/folders/3m/_x0lkjz93nqd_y3r6sw126dr0000gn/T/ipykernel_1463/3350209180.py:1021: ExperimentalWarning: NSGAIIISampler is experimental (supported from v3.2.0). The interface can change in the future.
("NSGAIII", NSGAIIISampler(
================================================================================ INITIALIZATION: Multi-Objective Optimization Comparison ================================================================================ Successfully loaded MOEAD sampler from OptunaHub. ────────────────────────────────────────────────────────────────────── CONFIGURATION: Setting Up Experiment Parameters ────────────────────────────────────────────────────────────────────── Using experimental BLXAlphaCrossover. Running with n_trials = 10000, population_size = 100 Detected Jupyter environment. Will display static plots inline if Kaleido is installed. Configuring MOEAD with n_neighbors = 20 ================================================================================ EXPERIMENT: Running ZDT1 (2 Objectives) experiment with 2 objectives ================================================================================ Results will be saved in: results_ZDT1_2_Objectives_20250428_173919 ────────────────────────────────────────────────────────────────────── OPTIMIZATION: Running Optimization for Each Sampler ────────────────────────────────────────────────────────────────────── ·················································· Optimizing with RandomSampler ··················································
RandomSampler Progress: 100%|███████████| 10000/10000 [00:09<00:00, 1054.55it/s]
Number of Pareto-optimal solutions for RandomSampler: 25 ·················································· Optimizing with NSGAII ··················································
NSGAII Progress: 100%|███████████████████| 10000/10000 [00:24<00:00, 407.05it/s]
Number of Pareto-optimal solutions for NSGAII: 77 ·················································· Optimizing with NSGAIII ··················································
NSGAIII Progress: 100%|██████████████████| 10000/10000 [00:24<00:00, 403.28it/s]
Number of Pareto-optimal solutions for NSGAIII: 60 ·················································· Optimizing with MOEAD ··················································
MOEAD Progress: 100%|████████████████████| 10000/10000 [00:24<00:00, 400.23it/s]
Number of Pareto-optimal solutions for MOEAD: 135 ────────────────────────────────────────────────────────────────────── ANALYSIS: Pareto Solution Statistics ──────────────────────────────────────────────────────────────────────
| | Sampler | Pareto Solutions |
|---|---|---|
| 0 | RandomSampler | 25 |
| 1 | NSGAII | 77 |
| 2 | NSGAIII | 60 |
| 3 | MOEAD | 135 |
────────────────────────────────────────────────────────────────────── VISUALIZATION: Generating Visualizations ────────────────────────────────────────────────────────────────────── ────────────────────────────────────────────────────────────────────── VISUALIZATION: Plotting for RandomSampler ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_RandomSampler_pareto.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_RandomSampler_pareto.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Plotting for NSGAII ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_NSGAII_pareto.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_NSGAII_pareto.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Plotting for NSGAIII ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_NSGAIII_pareto.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_NSGAIII_pareto.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Plotting for MOEAD ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_MOEAD_pareto.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_MOEAD_pareto.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Creating Comparison Plot ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_comparison.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_comparison.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Creating Parallel Coordinates Plot ────────────────────────────────────────────────────────────────────── Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_parallel_coords.html Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_parallel_coords.png Displaying static plot:
────────────────────────────────────────────────────────────────────── VISUALIZATION: Creating Objective Box Plots (Seaborn Style) ────────────────────────────────────────────────────────────────────── Saved static plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_objective_boxplots_seaborn.png Displaying static plot:
Saved interactive plot: results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_objective_boxplots.html ────────────────────────────────────────────────────────────────────── VISUALIZATION: All Visualizations Completed ────────────────────────────────────────────────────────────────────── Static images should be displayed above (if generated). Files saved to 'results_ZDT1_2_Objectives_20250428_173919' directory. ────────────────────────────────────────────────────────────────────── ANALYSIS: Calculating Performance Metrics ────────────────────────────────────────────────────────────────────── Calculated Reference Point for Hypervolume: [1.1000009999999998, 5.806083526846202] ────────────────────────────────────────────────────────────────────── ANALYSIS: ZDT1 (2 Objectives) Summary ──────────────────────────────────────────────────────────────────────
| | Sampler | Pareto Solutions | min f1 | min f2 | Hypervolume |
|---|---|---|---|---|---|
| 0 | RandomSampler | 25 | 7.422571e-05 | 1.919617 | 3.496599 |
| 1 | NSGAII | 77 | 2.359262e-07 | 0.270695 | 5.648886 |
| 2 | NSGAIII | 60 | 1.060742e-05 | 0.339271 | 5.586794 |
| 3 | MOEAD | 135 | 4.724886e-05 | 0.173464 | 5.862657 |
────────────────────────────────────────────────────────────────────── ANALYSIS: Detailed Performance Metrics ──────────────────────────────────────────────────────────────────────
| | Sampler | Total Pareto Trials Found | Valid Pareto Solutions | Execution Time (s) | Min f1 | Mean f1 | Median f1 | Max f1 | Std f1 | Min f2 | Mean f2 | Median f2 | Max f2 | Std f2 | Hypervolume |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | RandomSampler | 25 | 25 | 9.4826 | 7.4226e-05 | 0.2777 | 0.1972 | 0.9965 | 0.3027 | 1.9196 | 3.3091 | 3.1917 | 5.2783 | 0.8793 | 3.4966 |
| 1 | NSGAII | 77 | 77 | 24.5672 | 2.3593e-07 | 0.3494 | 0.3117 | 0.9984 | 0.3278 | 0.2707 | 1.0618 | 0.8335 | 3.0307 | 0.6538 | 5.6489 |
| 2 | NSGAIII | 60 | 60 | 24.7964 | 1.0607e-05 | 0.3830 | 0.3895 | 1.0000 | 0.2900 | 0.3393 | 0.9703 | 0.8321 | 2.7034 | 0.5113 | 5.5868 |
| 3 | MOEAD | 135 | 135 | 24.9855 | 4.7249e-05 | 0.3094 | 0.2657 | 0.9460 | 0.2579 | 0.1735 | 0.7000 | 0.6677 | 1.6386 | 0.3069 | 5.8627 |
/var/folders/3m/_x0lkjz93nqd_y3r6sw126dr0000gn/T/ipykernel_1463/3350209180.py:1057: ExperimentalWarning: NSGAIIISampler is experimental (supported from v3.2.0). The interface can change in the future.
Performance metrics saved to results_ZDT1_2_Objectives_20250428_173919/ZDT1 (2 Objectives)_metrics.csv ================================================================================ EXPERIMENT: ZDT1 (2 Objectives) Completed ================================================================================ Configuring MOEAD with n_neighbors = 20 ================================================================================ EXPERIMENT: Running DTLZ1 (3 Objectives) experiment with 3 objectives ================================================================================ Results will be saved in: results_DTLZ1_3_Objectives_20250428_174201 ────────────────────────────────────────────────────────────────────── OPTIMIZATION: Running Optimization for Each Sampler ────────────────────────────────────────────────────────────────────── ·················································· Optimizing with RandomSampler ··················································
RandomSampler Progress: 100%|███████████| 10000/10000 [00:03<00:00, 2560.12it/s]
Number of Pareto-optimal solutions for RandomSampler: 55 ·················································· Optimizing with NSGAII ··················································
NSGAII Progress: 100%|███████████████████| 10000/10000 [00:18<00:00, 548.54it/s]
Number of Pareto-optimal solutions for NSGAII: 119
··················································
Optimizing with NSGAIII
··················································
NSGAIII Progress: 100%|██████████████████| 10000/10000 [00:18<00:00, 548.03it/s]
Number of Pareto-optimal solutions for NSGAIII: 79
··················································
Optimizing with MOEAD
··················································
MOEAD Progress: 100%|████████████████████| 10000/10000 [00:19<00:00, 518.56it/s]
Number of Pareto-optimal solutions for MOEAD: 691
──────────────────────────────────────────────────────────────────────
ANALYSIS: Pareto Solution Statistics
──────────────────────────────────────────────────────────────────────
| | Sampler | Pareto Solutions |
|---|---|---|
| 0 | RandomSampler | 55 |
| 1 | NSGAII | 119 |
| 2 | NSGAIII | 79 |
| 3 | MOEAD | 691 |
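For reference, the sampler line-up behind these counts (including the `n_neighbors = 20` setting logged before the DTLZ1 run and the Tchebycheff aggregation mentioned earlier) could be set up roughly as follows. The OptunaHub module path, the population size, and the seeds are assumptions; only `scalar_aggregation_func="tchebycheff"` and `n_neighbors=20` are taken from this document.

```python
import optuna
import optunahub

N_TRIALS = 10_000          # matches the progress bars above
POPULATION_SIZE = 50       # assumption: the notebook's actual value is not shown here

# MOEA/D is loaded from OptunaHub; the module path below is an assumption
# based on the published MOEA/D sampler package.
moead = optunahub.load_module("samplers/moead")

samplers = {
    "RandomSampler": optuna.samplers.RandomSampler(seed=42),
    "NSGAII": optuna.samplers.NSGAIISampler(population_size=POPULATION_SIZE, seed=42),
    "NSGAIII": optuna.samplers.NSGAIIISampler(population_size=POPULATION_SIZE, seed=42),
    "MOEAD": moead.MOEADSampler(
        population_size=POPULATION_SIZE,
        n_neighbors=20,                          # as logged before the DTLZ1 run
        scalar_aggregation_func="tchebycheff",   # Tchebycheff decomposition
    ),
}

for name, sampler in samplers.items():
    study = optuna.create_study(directions=["minimize"] * 3, sampler=sampler)
    # `dtlz1_objective` is a placeholder for the DTLZ1 objective defined earlier.
    study.optimize(dtlz1_objective, n_trials=N_TRIALS)
```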
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Generating Visualizations
──────────────────────────────────────────────────────────────────────
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Plotting for RandomSampler
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto.html
Saving and displaying 3D plot from multiple views:
Saved static view 1: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto_view1.png
Saved static view 2: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto_view2.png
Saved static view 3: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto_view3.png
Saved static view 4: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto_view4.png
Saved static view 5: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_RandomSampler_pareto_view5.png
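Because a single static image hides parts of a 3-D Pareto front, each front is exported from several camera angles plus an interactive HTML version. A minimal sketch of that idea with plotly and matplotlib is shown below; the column names, viewpoints, and file naming are illustrative, not the notebook's exact code.

```python
import matplotlib.pyplot as plt
import plotly.express as px
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3-D projection on older matplotlib)

def save_pareto_3d(df, out_prefix, title):
    """Save an interactive 3-D Pareto plot plus several static viewpoints.

    `df` is assumed to have columns f1, f2, f3; the (elev, azim) pairs are
    illustrative camera angles, not the notebook's exact settings.
    """
    # Interactive HTML version.
    px.scatter_3d(df, x="f1", y="f2", z="f3", title=title).write_html(f"{out_prefix}.html")

    # Static PNGs from different camera angles.
    views = [(30, 45), (30, 135), (30, 225), (60, 45), (10, 90)]
    for i, (elev, azim) in enumerate(views, start=1):
        fig = plt.figure(figsize=(7, 6))
        ax = fig.add_subplot(111, projection="3d")
        ax.scatter(df["f1"], df["f2"], df["f3"], s=10)
        ax.set_xlabel("f1"); ax.set_ylabel("f2"); ax.set_zlabel("f3")
        ax.set_title(title)
        ax.view_init(elev=elev, azim=azim)  # rotate the camera for this view
        fig.savefig(f"{out_prefix}_view{i}.png", dpi=150, bbox_inches="tight")
        plt.close(fig)
```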
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Plotting for NSGAII
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto.html
Saving and displaying 3D plot from multiple views:
Saved static view 1: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto_view1.png
Saved static view 2: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto_view2.png
Saved static view 3: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto_view3.png
Saved static view 4: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto_view4.png
Saved static view 5: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAII_pareto_view5.png
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Plotting for NSGAIII
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto.html
Saving and displaying 3D plot from multiple views:
Saved static view 1: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto_view1.png
Saved static view 2: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto_view2.png
Saved static view 3: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto_view3.png
Saved static view 4: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto_view4.png
Saved static view 5: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_NSGAIII_pareto_view5.png
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Plotting for MOEAD
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto.html
Saving and displaying 3D plot from multiple views:
Saved static view 1: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto_view1.png
Saved static view 2: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto_view2.png
Saved static view 3: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto_view3.png
Saved static view 4: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto_view4.png
Saved static view 5: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_MOEAD_pareto_view5.png
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Creating Comparison Plot
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison.html
Saving and displaying 3D plot from multiple views:
Saved static view 1: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison_view1.png
Saved static view 2: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison_view2.png
Saved static view 3: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison_view3.png
Saved static view 4: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison_view4.png
Saved static view 5: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_comparison_view5.png
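The parallel-coordinates and box-plot figures logged next are standard plotly-express and seaborn plots over the combined Pareto solutions. A rough sketch follows; the column names `f1`–`f3` and `Sampler`, as well as the output file names, are assumptions about the notebook's internal data layout.

```python
import plotly.express as px
import seaborn as sns

# `df` is assumed to hold one row per Pareto solution with columns
# f1, f2, f3 and a "Sampler" label.

# Interactive parallel-coordinates plot, coloured by f1.
fig = px.parallel_coordinates(df, dimensions=["f1", "f2", "f3"], color="f1")
fig.write_html("parallel_coords.html")

# One box plot per objective, grouped by sampler (seaborn style).
long_df = df.melt(id_vars="Sampler", value_vars=["f1", "f2", "f3"],
                  var_name="Objective", value_name="Value")
ax = sns.boxplot(data=long_df, x="Objective", y="Value", hue="Sampler")
ax.figure.savefig("objective_boxplots_seaborn.png", dpi=150, bbox_inches="tight")
```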
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Creating Parallel Coordinates Plot
──────────────────────────────────────────────────────────────────────
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_parallel_coords.html
Saved static plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_parallel_coords.png
Displaying static plot:
──────────────────────────────────────────────────────────────────────
VISUALIZATION: Creating Objective Box Plots (Seaborn Style)
──────────────────────────────────────────────────────────────────────
Saved static plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_objective_boxplots_seaborn.png
Displaying static plot:
Saved interactive plot: results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_objective_boxplots.html
──────────────────────────────────────────────────────────────────────
VISUALIZATION: All Visualizations Completed
──────────────────────────────────────────────────────────────────────
Static images should be displayed above (if generated).
Files saved to 'results_DTLZ1_3_Objectives_20250428_174201' directory.
──────────────────────────────────────────────────────────────────────
ANALYSIS: Calculating Performance Metrics
──────────────────────────────────────────────────────────────────────
Calculated Reference Point for Hypervolume: [387.6201266590279, 398.723956134985, 295.3295772035236]
──────────────────────────────────────────────────────────────────────
ANALYSIS: DTLZ1 (3 Objectives) Summary
──────────────────────────────────────────────────────────────────────
| | Sampler | Pareto Solutions | min f1 | min f2 | min f3 | Hypervolume |
|---|---|---|---|---|---|---|
| 0 | RandomSampler | 55 | 3.3249e-04 | 9.2743e-04 | 3.8823e-02 | 4.5542e+07 |
| 1 | NSGAII | 119 | 8.3005e-05 | 1.4516e-18 | 8.4591e-16 | 4.5644e+07 |
| 2 | NSGAIII | 79 | 7.1332e-05 | 7.9194e-21 | 1.2031e-15 | 4.5643e+07 |
| 3 | MOEAD | 691 | 3.9592e-05 | 6.2972e-18 | 8.9717e-17 | 4.5644e+07 |
──────────────────────────────────────────────────────────────────────
ANALYSIS: Detailed Performance Metrics
──────────────────────────────────────────────────────────────────────
| | Sampler | Total Pareto Trials Found | Valid Pareto Solutions | Execution Time (s) | Min f1 | Mean f1 | Median f1 | Max f1 | Std f1 | Min f2 | Mean f2 | Median f2 | Max f2 | Std f2 | Min f3 | Mean f3 | Median f3 | Max f3 | Std f3 | Hypervolume |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | RandomSampler | 55 | 55 | 3.9060 | 3.3249e-04 | 61.2411 | 23.1314 | 352.3819 | 86.3613 | 9.2743e-04 | 46.7177 | 16.4080 | 362.4763 | 80.3101 | 3.8823e-02 | 25.4984 | 15.4683 | 123.0445 | 30.1006 | 4.5542e+07 |
| 1 | NSGAII | 119 | 119 | 18.2300 | 8.3005e-05 | 2.2174 | 0.0660 | 19.8481 | 4.7216 | 1.4516e-18 | 11.6577 | 0.0066 | 132.9446 | 25.7896 | 8.4591e-16 | 40.5568 | 21.6499 | 268.4814 | 44.9087 | 4.5644e+07 |
| 2 | NSGAIII | 79 | 79 | 18.2472 | 7.1332e-05 | 1.1948 | 0.0273 | 12.1369 | 2.7762 | 7.9194e-21 | 18.6237 | 8.2700 | 165.6149 | 34.1845 | 1.2031e-15 | 30.0389 | 17.0990 | 189.9682 | 37.6700 | 4.5643e+07 |
| 3 | MOEAD | 691 | 691 | 19.2840 | 3.9592e-05 | 0.2813 | 0.2214 | 1.7808 | 0.2836 | 6.2972e-18 | 0.2345 | 0.1907 | 1.0211 | 0.1786 | 8.9717e-17 | 0.3282 | 0.2973 | 1.7045 | 0.2123 | 4.5644e+07 |
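With three objectives, an exact hypervolume computation is considerably more involved than the 2-objective sweep shown earlier, so a quick Monte-Carlo estimate against the logged reference point can serve as a sanity check on the numbers above. The sketch below is only that, a check; it is not the method used to produce this table.

```python
import numpy as np

def hypervolume_mc(front, ref, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the hypervolume of a minimization front.

    Uniformly samples the box [per-objective front minimum, ref] and counts
    the fraction of samples dominated by at least one front member, then
    scales by the box volume. Accuracy improves with `n_samples`.
    """
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref, dtype=float)
    lower = front.min(axis=0)
    rng = np.random.default_rng(seed)
    samples = rng.uniform(lower, ref, size=(n_samples, ref.size))
    dominated = np.zeros(n_samples, dtype=bool)
    for point in front:
        # A sample is dominated if some front point is <= it in every objective.
        dominated |= np.all(point <= samples, axis=1)
    return float(dominated.mean() * np.prod(ref - lower))

# Reference point as logged for DTLZ1 (~1.1 x the worst observed value per objective).
ref_point = np.array([387.6201266590279, 398.723956134985, 295.3295772035236])
# `pareto_points_3d` would be an (n, 3) array of (f1, f2, f3) values:
# hv_estimate = hypervolume_mc(pareto_points_3d, ref_point)
```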
Performance metrics saved to results_DTLZ1_3_Objectives_20250428_174201/DTLZ1 (3 Objectives)_metrics.csv
================================================================================
EXPERIMENT: DTLZ1 (3 Objectives) Completed
================================================================================
================================================================================
SUMMARY: Generating Overall Experiment Summary
================================================================================
──────────────────────────────────────────────────────────────────────
SUMMARY: Overall Experiment Results
──────────────────────────────────────────────────────────────────────
| | Experiment | Sampler/Config | Valid Pareto Solutions | Hypervolume | Min f1 | Min f2 | Min f3 |
|---|---|---|---|---|---|---|---|
| 0 | ZDT1 (2 Objectives) | RandomSampler | 25 | 3.4966e+00 | 0.0001 | 1.9196 | NaN |
| 1 | ZDT1 (2 Objectives) | NSGAII | 77 | 5.6489e+00 | 0.0000 | 0.2707 | NaN |
| 2 | ZDT1 (2 Objectives) | NSGAIII | 60 | 5.5868e+00 | 0.0000 | 0.3393 | NaN |
| 3 | ZDT1 (2 Objectives) | MOEAD | 135 | 5.8627e+00 | 0.0000 | 0.1735 | NaN |
| 4 | DTLZ1 (3 Objectives) | RandomSampler | 55 | 4.5542e+07 | 0.0003 | 0.0009 | 0.0388 |
| 5 | DTLZ1 (3 Objectives) | NSGAII | 119 | 4.5644e+07 | 0.0001 | 0.0000 | 0.0000 |
| 6 | DTLZ1 (3 Objectives) | NSGAIII | 79 | 4.5643e+07 | 0.0001 | 0.0000 | 0.0000 |
| 7 | DTLZ1 (3 Objectives) | MOEAD | 691 | 4.5644e+07 | 0.0000 | 0.0000 | 0.0000 |
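The overall summary is essentially a concatenation of the per-experiment metrics files, keeping only the columns shown above; ZDT1 has no third objective, which is why its Min f3 entries are NaN. A rough pandas sketch, assuming the per-experiment CSVs already contain these columns:

```python
import glob
import pandas as pd

summary_cols = ["Experiment", "Sampler/Config", "Valid Pareto Solutions",
                "Hypervolume", "Min f1", "Min f2", "Min f3"]

# Gather the per-experiment metrics CSVs written earlier.
frames = [pd.read_csv(path)
          for path in sorted(glob.glob("results_*_Objectives_*/*_metrics.csv"))]

summary = pd.concat(frames, ignore_index=True)
summary = summary.reindex(columns=summary_cols)  # 2-objective runs get NaN for Min f3
summary.to_csv("results_OVERALL_SUMMARY.csv", index=False)  # timestamped name in the actual run
```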
Overall summary saved to: results_OVERALL_SUMMARY_20250428_174402.csv
================================================================================
COMPLETION: All Experiments Completed Successfully
================================================================================
All visualizations and metrics have been saved to their respective results directories:
- ZDT1 (2 Objectives): results_ZDT1_2_Objectives_20250428_173919
- DTLZ1 (3 Objectives): results_DTLZ1_3_Objectives_20250428_174201