Drift Diffusion Model (DDM) Decision-Making Analysis Pipeline

Self-Taught Exploration of Behavioral Economics through Computational Neuroscience & Psychopathology

(2025, Amateur Implementation with Professional Aspirations)

Disclaimer: This project is entirely my own original creation, developed without copying from any GitHub repository, with the support of generative AI. Although AI tools were used, the project's success rests on my proficiency in prompt engineering and my background in behavioral economics, computational neuroscience, and psychopathology research. Without these inspirations and my self-taught grounding in cognitive modeling, this work would not have been achievable.

This is an AMATEUR ATTEMPT driven by my personal interest in neuroscience, behavioral economics, and computational psychiatry, greatly influenced by inspiring friends around me. Special thanks to the PANDM LAB of HKU's Department of Psychology (https://pandmlab.hku.hk/) for their groundbreaking computational psychiatry research benefiting people with psychiatric disorders - I find their work fascinating, and it drives my motivation in this area.

This implementation serves as a foundation for understanding decision-making processes across various contexts, with applications in psychology, marketing, user experience, and behavioral economics.

Theoretical Foundation: Based on the well-established Drift Diffusion Model framework from cognitive psychology and neuroscience research, providing mathematically rigorous simulation of two-alternative forced choice decisions.

© david-kwan.com 2025


A. Core Problem & Business Impact

Understanding Human Decision-Making Processes in Business Contexts
The Hidden Complexity of Consumer Choice Behavior
Critical Business Impact:

  • Evidence Accumulation Modeling: The DDM framework captures how consumers gradually collect and process information over time before making decisions. Understanding this evidence accumulation process reveals why some customers need multiple touchpoints, how product information sequencing affects choices, and when to provide additional details versus when to prompt action.
  • Decision Timing Analysis: Understanding why some customers make quick decisions while others deliberate extensively can inform optimal sales strategies and user interface design.
  • Bias Detection in Consumer Behavior: The DDM framework reveals how initial preferences (starting point bias) significantly influence final choices, with implications for product positioning and marketing strategies.
  • Information Integration Patterns: Evidence accumulation shows how customers weigh and combine different pieces of information (price, features, reviews, brand reputation) over time, enabling optimization of information presentation and timing.
  • Risk Tolerance Modeling: Different boundary separation parameters reveal varying risk tolerances among consumers, enabling targeted approaches for different customer segments.
  • Pressure Sensitivity: The model demonstrates how time pressure (boundary separation) and external influences (drift rate) affect decision quality and speed, crucial for optimizing sales environments and digital interfaces.
  • Choice Architecture Optimization: By understanding the cognitive mechanisms underlying decisions, businesses can design better choice environments that align with natural decision-making processes.

B. Project Impact Summary

  • Comprehensive Decision Modeling Framework:
    • Implements complete DDM simulation with all key parameters: drift rate, boundary separation, non-decision time, starting point bias, and noise variance.
    • Successfully models 9 distinct decision-making scenarios representing different consumer archetypes and business contexts.
  • Advanced Visualization Capabilities:
    • Creates publication-quality interactive visualizations with decision trace plots, reaction time distributions, and comprehensive parameter annotations.
    • Demonstrates sophisticated color-coding and legend systems for clear communication of complex psychological concepts.
  • Robust Statistical Analysis:
    • Processes 2,000 simulated trials per scenario to ensure statistically reliable results and meaningful pattern detection.
    • Provides detailed reaction time statistics (mean, std, min, median, max) and choice proportion analysis for each decision alternative.
  • Business-Relevant Scenario Modeling:
    • Standard Shopper (57.9% Coffee, 42.1% Tea): Baseline unbiased decision-making with balanced preferences.
    • Coffee Lover Bias (80.4% Coffee, 19.6% Tea): Demonstrates impact of strong initial preferences on choice outcomes.
    • Cautious Buyer (High Stakes): Shows how increased caution leads to longer decision times (1.2s average) and potential decision timeouts.
    • Impulsive Buyer (Low Stakes): Fast decisions (0.35s average) with reduced deliberation in low-pressure environments.
  • Psychological Insights Validation:
    • Successfully demonstrates key DDM principles: bias effects, speed-accuracy tradeoffs, and the role of evidence accumulation in decision-making.
    • Validates theoretical predictions with empirical simulation results across diverse parameter combinations.
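The choice-proportion and reaction-time statistics above can be reproduced with a short pandas pipeline. This is a sketch, not the project's actual code: the DataFrame, its column names (decision, rt), and the gamma-distributed reaction times are illustrative stand-ins.

```python
# Sketch of per-scenario summary statistics; "decision" and "rt" are
# hypothetical column names, and the RT data here is a synthetic stand-in.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 2000
trials = pd.DataFrame({
    "decision": rng.choice(["Buy Coffee", "Buy Tea"], size=n, p=[0.58, 0.42]),
    "rt": rng.gamma(4.0, 0.07, n) + 0.25,  # toy RTs: gamma spread + 0.25s floor
})

# Choice proportions (%) for each alternative
choice_summary = trials["decision"].value_counts(normalize=True).mul(100).round(1)

# Reaction-time statistics per decision: count, mean, std, min, median, max
rt_summary = trials.groupby("decision")["rt"].agg(
    ["count", "mean", "std", "min", "median", "max"]
).round(3)
```

Printing choice_summary and rt_summary yields tables in the same shape as the per-scenario summaries later in this document.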

C. Results Summary (9 Scenarios with 2,000 Trials Each)

Scenario Performance Analysis - Decision-Making Patterns Across Consumer Types

| Scenario              | Decision Pattern                    | Mean RT (Coffee) | Mean RT (Tea) | Key Insight                              |
| Standard Shopper      | Balanced (57.9% vs 42.1%)           | 0.521s           | 0.505s        | Baseline unbiased decision-making        |
| Coffee Lover (Biased) | Strong Bias (80.4% vs 19.6%)        | 0.421s           | 0.562s        | Bias accelerates preferred choices       |
| Cautious Buyer        | Deliberative (63.1% vs 36.1%)       | 1.202s           | 1.259s        | High stakes increase deliberation time   |
| Impulsive Buyer       | Quick (55.4% vs 44.6%)              | 0.349s           | 0.348s        | Low stakes enable rapid decisions        |
| Indecisive Shopper    | Ambiguous (50.7% vs 49.4%)          | 0.517s           | 0.521s        | Zero drift creates random walk behavior  |
| Discount Hunter       | Promotion-Driven (68.9% vs 31.1%)   | 0.500s           | 0.510s        | Strong evidence shifts preferences       |
| High-Pressure Sale    | Time-Pressured (53.9% vs 46.1%)     | 0.320s           | 0.323s        | Pressure reduces decision quality        |
| Analysis Paralysis    | Information Overload (50.9% vs 49.1%) | 0.352s         | 0.354s        | Noise overwhelms weak evidence           |
| Queue Effect          | Dynamic Pressure (58.2% vs 41.8%)   | 0.510s           | 0.507s        | Growing urgency curbs the slowest decisions |

Strategic Decision-Making Insights

  • Bias Amplification Effect: Coffee Lover scenario demonstrates how initial preferences (starting point = 0.75) dramatically shift outcomes while reducing decision time for preferred options.
  • Speed-Accuracy Tradeoff: Cautious buyers take 3.4x longer than impulsive buyers but show more consistent decision patterns, validating the fundamental speed-accuracy tradeoff principle.
  • Pressure Sensitivity: High-pressure scenarios reduce decision time to 0.32s but maintain near-random choice proportions, suggesting compromised decision quality under time constraints.
  • Evidence Sensitivity: Discount Hunter scenario with high drift rate (0.8) shows strong preference shifts despite neutral starting position, demonstrating the power of compelling evidence.

Key Technical Achievements

  • Complete DDM Implementation: All core parameters successfully implemented with proper stochastic noise modeling (σ = 1.0).
  • Advanced Visualization System: Publication-ready plots with Comic Sans MS theming, color-coded decision traces, and comprehensive parameter annotation.
  • Statistical Robustness: 18,000 total simulated decisions (nine scenarios × 2,000 trials) with detailed distributional analysis.
  • Comprehensive Scenario Coverage: Successfully models decision-making across diverse psychological and business contexts.
  • Real-Time Simulation: Efficient implementation allowing for rapid scenario testing and parameter exploration.

D. Technical Architecture, Implementation, & Deployment

Mathematical Foundation & Implementation

The implementation follows rigorous DDM mathematical principles centered on evidence accumulation:

Core DDM Parameters with Greek Symbols:

  • μ (mu) - Drift Rate: Controls the speed and direction of evidence accumulation. Positive values favor the upper boundary (e.g., "Buy Coffee"), negative values favor the lower boundary (e.g., "Buy Tea"), and zero creates random walk behavior with no directional bias.
  • a (alpha) - Boundary Separation: Determines the amount of evidence required before making a decision. Higher values increase caution and deliberation time but improve accuracy, while lower values enable faster but potentially less reliable decisions.
  • z - Starting Point Bias: Sets the initial evidence position between boundaries, representing pre-existing preferences or biases. Values closer to one boundary create bias toward that choice (z > a/2 = upper bias, z < a/2 = lower bias), while a centered value (z = a/2) represents unbiased starting conditions.
  • Ter (tau) - Non-Decision Time: Accounts for motor response delays, perception time, and other non-decisional processes. Represents the minimum time required regardless of the decision complexity.
  • σ (sigma) - Noise Scale: Controls the variability in evidence accumulation. Higher noise creates more erratic decision paths and increased reaction time variability, simulating uncertainty and inconsistent information processing.
  • dt (delta-t) - Time Step: The temporal resolution of the simulation process, determining the granularity of evidence accumulation updates (typically dt = 0.001 for high precision).
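In standard DDM notation, these parameters combine into a single stochastic process: evidence X starts at z, drifts at rate μ under Wiener noise, and a response is produced when X first reaches either boundary.

```latex
% Evidence accumulation as a drift-diffusion (Wiener) process
dX_t = \mu\,dt + \sigma\,dW_t, \qquad X_0 = z, \qquad 0 \le z \le a

% Reaction time = first boundary crossing plus non-decision time
\mathrm{RT} = T_{er} + \inf\{\, t \ge 0 \,:\, X_t \ge a \ \text{ or }\ X_t \le 0 \,\}
```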

Evidence Accumulation Process: The fundamental principle underlying DDM is that decisions emerge through gradual accumulation of evidence toward decision boundaries. This process captures how:

  • Information Integration: Each piece of evidence (positive or negative) incrementally moves the decision process toward one choice or another
  • Temporal Dynamics: Decision confidence builds over time as evidence accumulates, explaining why some decisions feel "certain" while others remain uncertain
  • Noise and Variability: Random fluctuations in evidence accumulation account for decision variability even with identical inputs
  • Threshold Crossing: Decisions occur when accumulated evidence reaches a predetermined threshold, modeling the "tipping point" in human choice behavior
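A minimal single-trial sketch of this threshold-crossing process, using the Euler-Maruyama discretization (x += μ·dt + σ·√dt·ε per step). Parameter names mirror the document; the function itself is illustrative, not the project's actual implementation.

```python
import numpy as np

def simulate_ddm_trial(mu=0.3, a=1.0, z=0.5, t_er=0.25, sigma=1.0,
                       dt=0.001, max_time=5.0, rng=None):
    """Return (choice, reaction_time); choice is 'upper', 'lower', or 'timeout'."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = z, 0.0
    while t < max_time:
        # Euler-Maruyama step: deterministic drift plus sqrt(dt)-scaled noise
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= a:       # upper boundary reached, e.g. "Buy Coffee"
            return "upper", t + t_er
        if x <= 0.0:     # lower boundary reached, e.g. "Buy Tea"
            return "lower", t + t_er
    return "timeout", max_time + t_er

choice, rt = simulate_ddm_trial(rng=np.random.default_rng(0))
```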

Enhanced Simulation Architecture

  • Stochastic Process Modeling: Proper implementation of Wiener process with configurable noise parameters and time step resolution (dt = 0.001).
  • Boundary Detection: Accurate threshold crossing detection with configurable upper and lower boundaries.
  • Parameter Flexibility: Complete control over all DDM parameters:
    • Drift Rate (μ): Evidence accumulation speed and direction (-∞ to +∞)
    • Boundary Separation (a): Decision threshold distance (> 0)
    • Non-Decision Time (Ter): Motor and perceptual delays (≥ 0)
    • Starting Point (z): Initial evidence bias (0 ≤ z ≤ a)
    • Noise Scale (σ): Variability in evidence accumulation (> 0)

Advanced Visualization Framework

  • Multi-Panel Layout: GridSpec-based layout with reaction time histograms and main evidence accumulation plot.
  • Custom Theme Design by David Kwan: Publication-quality charts featuring Comic Sans MS typography, warm earth-tone color palette (#8B4513, #D2691E, #CD853F), and custom background styling (#F5F5DC, #FFF8DC) for enhanced readability and aesthetic appeal.
  • Dynamic Color Generation: HSV color space manipulation for trace differentiation while maintaining decision-based color schemes.
  • Comprehensive Annotation System:
    # Parameter visualization includes:
    - Boundary markers with decision labels
    - Starting point indicators with bias descriptions
    - Drift rate arrows showing evidence direction
    - Non-decision time markers
    - Comprehensive parameter legends
    
  • Statistical Integration: Real-time calculation and display of choice proportions, reaction time distributions, and bias metrics.
  • Chart Settings & Theme by David Kwan: Each DDM visualization includes custom formatting, professional styling, and optimized parameter annotations designed for both technical accuracy and visual communication.
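The dynamic color generation described above can be sketched in a few lines: jitter the hue around a decision-specific base value while varying saturation and brightness. The base hues and jitter ranges here are illustrative assumptions, not the project's actual palette.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def trace_colors(n_traces, base_hue, hue_jitter=0.04, rng=None):
    """RGB colors clustered around base_hue with varied saturation/brightness."""
    rng = rng if rng is not None else np.random.default_rng()
    hues = (base_hue + rng.uniform(-hue_jitter, hue_jitter, n_traces)) % 1.0
    sats = rng.uniform(0.55, 0.90, n_traces)
    vals = rng.uniform(0.60, 0.95, n_traces)
    return hsv_to_rgb(np.stack([hues, sats, vals], axis=1))  # shape (n, 3)

coffee_colors = trace_colors(10, base_hue=0.08)  # warm brown/orange family
tea_colors = trace_colors(10, base_hue=0.33)     # green family
```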

Core Dependencies & Performance

  • Scientific Computing: numpy for mathematical operations and random process generation
  • Data Manipulation: pandas for statistical analysis and results formatting
  • Visualization: matplotlib with advanced styling including gridspec, patches, and colors modules
  • Statistical Analysis: Built-in calculation of descriptive statistics and distributional properties
  • Performance: Optimized for real-time simulation with configurable time resolution and trial counts

Deployment Considerations

  • Computational Efficiency: Vectorized operations where possible, with progress tracking for long simulations
  • Memory Management: Efficient storage of decision traces with optional trace limiting for large-scale simulations
  • Reproducibility: Configurable random seed options for consistent results across runs
  • Scalability: Framework supports parameter sweeps and batch processing for research applications
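A seeded, vectorized batch simulation might look like the following: every still-active trial advances one dt per step, and finished trials are masked out. This is a sketch under the document's parameter conventions, not the project's actual API.

```python
import numpy as np

def simulate_batch(n_trials, mu=0.3, a=1.0, z=0.5, t_er=0.25, sigma=1.0,
                   dt=0.001, max_time=5.0, seed=42):
    """Vectorized DDM batch; returns choice codes (+1 upper, -1 lower, 0 timeout) and RTs."""
    rng = np.random.default_rng(seed)        # fixed seed -> reproducible runs
    x = np.full(n_trials, z)
    rt = np.full(n_trials, np.nan)           # NaN marks timed-out trials
    choice = np.zeros(n_trials, dtype=int)
    active = np.ones(n_trials, dtype=bool)
    n_steps = int(round(max_time / dt))
    for step in range(1, n_steps + 1):
        # advance only the trials that have not yet crossed a boundary
        x[active] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit_upper = active & (x >= a)
        hit_lower = active & (x <= 0.0)
        choice[hit_upper], choice[hit_lower] = 1, -1
        rt[hit_upper | hit_lower] = step * dt + t_er
        active &= ~(hit_upper | hit_lower)
        if not active.any():
            break
    return choice, rt

choice, rt = simulate_batch(2000)
```

With μ = 0.3 and a centered start, roughly 55-60% of trials should end at the upper boundary, in line with the Standard Shopper results.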

E. Business Applications & Strategic Insights

Decision-Making Pattern Analysis

The framework reveals distinct cognitive patterns with direct business implications:

  • Bias Amplification (Coffee Lover): 80.4% preference demonstrates how initial brand loyalty creates self-reinforcing choice patterns
  • Pressure Sensitivity (High-Pressure Sale): Time pressure reduces decision quality while maintaining speed
  • Deliberation Value (Cautious Buyer): Longer decision time yields more stable choice patterns, validating investment in customer education

Strategic Applications

  • Customer Segmentation: Framework identifies distinct decision-making archetypes enabling targeted marketing strategies
  • Interface Design: Speed-accuracy tradeoffs inform optimal timing for user interface elements
  • Choice Architecture: DDM principles guide design of decision environments that align with natural cognitive processes

F. Future Development & Enhancement Roadmap

Short-Term Enhancements

  • Multi-Alternative Extensions: Expand beyond two-choice scenarios to model complex decision environments
  • Dynamic Parameter Modeling: Implement time-varying drift rates and boundaries to model changing decision contexts
  • Real-Time Data Integration: Connect simulation framework to actual behavioral data for model validation

Advanced Applications

  • A/B Testing Framework: Integrate DDM predictions with experimental design for sophisticated testing strategies
  • Customer Journey Modeling: Apply DDM principles to map and optimize complete customer decision journeys
  • Cross-Cultural Validation: Test DDM parameter variations across different cultural contexts and decision-making styles

G. References & Learning Resources

Educational Video Resources:

  • The Drift Diffusion Model - Computational Psychiatry Course
  • Introduction to Computational Models of Choice - Python Implementation
  • Hierarchical Drift Diffusion Models in Neuroscience
  • Understanding Decision Making Through Mathematical Models

Key Academic References:

Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python. Frontiers in Neuroinformatics, 7, 14.

Ratcliff, R., & McKoon, G. (2008). The Diffusion Decision Model: Theory and Data for Two-Choice Decision Tasks. Neural Computation, 20(4), 873-922.

Krajbich, I., Armel, C., & Rangel, A. (2012). The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions. Frontiers in Psychology, 3, 193.

Special Acknowledgment: HKU PANDM LAB Psychology Department (https://pandmlab.hku.hk/) for their inspiring work in computational psychiatry research benefiting individuals with psychiatric disorders.


PROJECT DEMONSTRATION

Nine Decision-Making Scenarios Implemented:

  • Scenario 1: The Standard Shopper - Balanced decision-making (57.9% Coffee, 42.1% Tea) [Baseline: μ=0.3, a=1.0, z=0.5]
  • Scenario 2: The Coffee Lover (Biased) - Strong preference bias (80.4% Coffee, 19.6% Tea) [↑ Starting Point: z=0.75]
  • Scenario 3: The Cautious Buyer (High Stakes) - Deliberative decisions with 0.8% timeouts [↑ Boundary: a=2.0]
  • Scenario 4: The Impulsive Buyer (Low Stakes) - Fast decisions (~0.35s reaction time) [↓ Boundary: a=0.6]
  • Scenario 5: The Indecisive Shopper (Ambiguous) - Near-random choices (50.7% vs 49.4%) [Zero Drift: μ=0.0]
  • Scenario 6: The Discount Hunter - Promotion-driven decisions (68.9% vs 31.1%) [↑ Drift Rate: μ=0.8]
  • Scenario 7: The High-Pressure Sale - Time-pressured quick decisions (~0.32s) [↓↓ Boundary: a=0.5]
  • Scenario 8: Analysis Paralysis - Information overload affecting decision quality [↓ Drift: μ=0.05, ↑ Noise: σ=2.5]
  • Scenario 9: The Queue Effect - Urgency-driven decisions (58.2% Coffee, 41.8% Tea) [Dynamic Boundaries: thresholds ease as waiting time grows]

Total Analysis: 18,000 simulated decisions with comprehensive reaction time analysis and choice pattern validation across diverse cognitive and business contexts.


Scenario 1: The Standard Shopper

Psychological Context

Represents a typical consumer making a routine purchase decision. They have a slight preference but remain open to either option. This is the baseline scenario against which others are compared.

Parameter Configuration

  • Drift Rate: 0.30 - Moderate positive drift indicates slight preference for coffee
  • Boundary Separation: 1.00 - Standard threshold reflects normal decision caution
  • Starting Point: 0.50 - Neutral starting point shows no initial bias
  • Non-Decision Time: 0.25s - Normal perceptual/motor processing time
  • Noise SD: 1.00 - Standard decision noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by balanced decision-making behavior.

Choice Summary - Scenario 1: The Standard Shopper

Count Percentage (%)
Decision
Buy Coffee 1137 56.850
Buy Tea 863 43.150

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1137 0.516 0.212 0.270 0.364 0.450 0.599 1.929
Buy Tea 863 0.509 0.205 0.263 0.363 0.447 0.601 1.619

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.320 0.806 0.235 0.411
Buy Tea 0.319 0.768 0.238 0.402
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 55.700 58.400 56.500
Buy Tea 44.300 41.600 43.500
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 2: The Coffee Lover (Biased)

Psychological Context

Someone with a strong prior preference for coffee, perhaps due to habit, taste preference, or caffeine dependency. They start closer to the coffee decision boundary.

Parameter Configuration

  • Drift Rate: 0.30 - Same information quality as standard shopper
  • Boundary Separation: 1.00 - Same decision caution as standard shopper
  • Starting Point: 0.75 - Biased toward coffee choice boundary
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by systematic bias toward one alternative.

Choice Summary - Scenario 2: The Coffee Lover (Biased)

Count Percentage (%)
Decision
Buy Coffee 1621 81.050
Buy Tea 379 18.950

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1621 0.412 0.189 0.256 0.293 0.339 0.461 1.567
Buy Tea 379 0.579 0.201 0.291 0.438 0.522 0.662 1.468

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.272 0.647 0.168 0.459
Buy Tea 0.373 0.866 0.224 0.347
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 99.400 82.200 61.500
Buy Tea 0.600 17.800 38.500
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 3: The Cautious Buyer (High Stakes)

Psychological Context

A careful decision-maker who wants to be very sure before committing. This could represent expensive purchases, health-conscious consumers, or someone with decision anxiety.

Parameter Configuration

  • Drift Rate: 0.30 - Same information processing rate
  • Boundary Separation: 2.00 - High threshold requires more evidence
  • Starting Point: 1.00 - Neutral relative to expanded boundaries
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by slower but more deliberate decisions.

Choice Summary - Scenario 3: The Cautious Buyer (High Stakes)

Count Percentage (%)
Decision
Buy Coffee 1272 63.600
Buy Tea 717 35.850
Timeout 11 0.550

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1272 1.223 0.707 0.335 0.705 1.035 1.534 4.160
Buy Tea 717 1.215 0.705 0.345 0.669 1.034 1.551 4.086

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.526 2.164 0.829 0.578
Buy Tea 0.517 2.267 0.882 0.580
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 61.700 66.400 63.700
Buy Tea 38.300 33.600 36.300
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 4: The Impulsive Buyer (Low Stakes)

Psychological Context

Quick decision-maker who doesn't deliberate much. Could represent low-cost purchases, time pressure, or personality-driven impulsivity.

Parameter Configuration

  • Drift Rate: 0.30 - Same information quality
  • Boundary Separation: 0.60 - Low threshold enables quick decisions
  • Starting Point: 0.30 - Neutral relative to compressed boundaries
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by rapid, potentially impulsive choices.

Choice Summary - Scenario 4: The Impulsive Buyer (Low Stakes)

Count Percentage (%)
Decision
Buy Coffee 1122 56.100
Buy Tea 878 43.900

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1122 0.351 0.081 0.257 0.293 0.329 0.384 0.804
Buy Tea 878 0.349 0.079 0.258 0.291 0.326 0.380 0.922

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.276 0.457 0.091 0.230
Buy Tea 0.276 0.454 0.089 0.227
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 56.200 54.700 57.400
Buy Tea 43.800 45.300 42.600
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 5: The Indecisive Shopper (Ambiguous)

Psychological Context

Faces genuinely ambiguous options with no clear preference. Both choices seem equally attractive, leading to decision difficulty and potential timeouts.

Parameter Configuration

  • Drift Rate: 0.00 - Zero drift - no systematic preference
  • Boundary Separation: 1.00 - Standard decision threshold
  • Starting Point: 0.50 - Perfectly neutral starting point
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level - decision driven by random fluctuations

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by high uncertainty and potential timeouts.

Choice Summary - Scenario 5: The Indecisive Shopper (Ambiguous)

Count Percentage (%)
Decision
Buy Coffee 1024 51.200
Buy Tea 976 48.800

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1024 0.519 0.225 0.274 0.360 0.450 0.601 1.679
Buy Tea 976 0.519 0.225 0.267 0.362 0.445 0.610 1.926

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.321 0.799 0.241 0.433
Buy Tea 0.318 0.820 0.248 0.433
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 52.200 50.400 51.100
Buy Tea 47.800 49.600 48.900
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 6: The Discount Hunter (Promotion on Coffee)

Psychological Context

Strong external incentive (discount/promotion) creates clear preference. Represents situation where economic factors override personal preferences.

Parameter Configuration

  • Drift Rate: 0.80 - High positive drift due to promotional advantage
  • Boundary Separation: 1.00 - Standard decision threshold
  • Starting Point: 0.50 - Neutral starting point despite strong drift
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by strong preference for one option.

Choice Summary - Scenario 6: The Discount Hunter (Promotion on Coffee)

Count Percentage (%)
Decision
Buy Coffee 1411 70.550
Buy Tea 589 29.450

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1411 0.502 0.208 0.272 0.358 0.440 0.578 1.871
Buy Tea 589 0.496 0.205 0.271 0.352 0.436 0.571 1.520

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.316 0.777 0.220 0.414
Buy Tea 0.311 0.754 0.219 0.413
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 70.600 71.100 70.000
Buy Tea 29.400 28.900 30.000
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 7: The High-Pressure Sale (Limited-Time Offer)

Psychological Context

Time pressure or sales tactics force quick decisions with reduced deliberation. The urgency lowers decision thresholds.

Parameter Configuration

  • Drift Rate: 0.30 - Standard preference strength
  • Boundary Separation: 0.50 - Very low threshold due to time pressure
  • Starting Point: 0.25 - Neutral relative to compressed boundaries
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by rapid, potentially impulsive choices.

Choice Summary - Scenario 7: The High-Pressure Sale (Limited-Time Offer)

Count Percentage (%)
Decision
Buy Coffee 1063 53.150
Buy Tea 937 46.850

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1063 0.321 0.058 0.256 0.279 0.303 0.346 0.767
Buy Tea 937 0.324 0.061 0.256 0.280 0.306 0.347 0.642

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.268 0.402 0.067 0.180
Buy Tea 0.269 0.404 0.067 0.188
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 54.000 54.100 51.400
Buy Tea 46.000 45.900 48.600
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 8: Analysis Paralysis (Conflicting Information)

Psychological Context

Overwhelmed by conflicting information and multiple factors to consider. High noise represents internal conflict and uncertainty about decision criteria.

Parameter Configuration

  • Drift Rate: 0.05 - Very weak preference due to conflicting signals
  • Boundary Separation: 1.50 - Elevated threshold seeking more certainty
  • Starting Point: 0.75 - Slight bias toward coffee despite confusion
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 2.50 - High noise represents internal conflict and uncertainty

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by high uncertainty and potential timeouts.

Choice Summary - Scenario 8: Analysis Paralysis (Conflicting Information)

Count Percentage (%)
Decision
Buy Tea 1011 50.550
Buy Coffee 989 49.450

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 989 0.350 0.083 0.259 0.293 0.327 0.379 0.808
Buy Tea 1011 0.348 0.080 0.256 0.292 0.325 0.377 0.895

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.277 0.446 0.086 0.237
Buy Tea 0.275 0.449 0.085 0.230
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 49.300 49.100 50.000
Buy Tea 50.700 50.900 50.000
[Figures: decision trace plot and reaction-time distribution for this scenario]

Scenario 9: The Queue Effect (Dynamic Boundaries)

Psychological Context

Represents decision-making when external pressure increases over time (e.g., long queue, time constraints). Both decision boundaries become easier to reach as impatience grows - the coffee choice requires less evidence (upper boundary moves down) and the tea choice also becomes easier (lower boundary moves up), modeling the psychological pressure to make ANY decision quickly.
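One common way to implement such dynamic boundaries is a linear collapse toward the midpoint, so that less evidence is needed on either side as waiting time grows. The collapse_rate parameter and the exact schedule here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def simulate_collapsing_trial(mu=0.3, a=1.0, z=0.5, t_er=0.25, sigma=1.0,
                              collapse_rate=0.1, dt=0.001, max_time=5.0,
                              rng=None):
    """DDM trial whose boundaries move linearly toward the midpoint over time."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = z, 0.0
    while t < max_time:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        shrink = collapse_rate * t            # urgency grows with waiting time
        upper = max(a - shrink, a / 2)        # upper boundary moves down...
        lower = min(shrink, a / 2)            # ...and lower boundary moves up
        if x >= upper:
            return "Buy Coffee", t + t_er
        if x <= lower:
            return "Buy Tea", t + t_er
    return "Timeout", max_time + t_er

choice, rt = simulate_collapsing_trial(rng=np.random.default_rng(1))
```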

Parameter Configuration

  • Drift Rate: 0.30 - Standard preference strength
  • Boundary Separation: 1.00 - Initial threshold before pressure effects
  • Starting Point: 0.50 - Neutral starting point
  • Non-Decision Time: 0.25s - Same processing time
  • Noise SD: 1.00 - Same noise level

Expected Behavioral Outcomes

This parameter combination should produce decision patterns characterized by balanced decision-making behavior.

Choice Summary - Scenario 9: The Queue Effect (Dynamic Boundaries)

Count Percentage (%)
Decision
Buy Coffee 1163 58.150
Buy Tea 837 41.850

Reaction Time Summary (in seconds)

count mean std min 25% median 75% max
Decision
Buy Coffee 1163 0.510 0.194 0.274 0.365 0.452 0.588 1.360
Buy Tea 837 0.507 0.202 0.263 0.359 0.449 0.585 1.528

Extended Statistical Analysis

Detailed Percentile Analysis
10th_percentile 90th_percentile IQR coefficient_of_variation
decision_label
Buy Coffee 0.323 0.815 0.223 0.381
Buy Tea 0.318 0.810 0.226 0.398
Decision Patterns by Speed
reaction_time Fast Medium Slow
decision_label
Buy Coffee 56.000 59.600 58.900
Buy Tea 44.000 40.400 41.100
[Figures: decision trace plot and reaction-time distribution for this scenario]

COMPREHENSIVE CROSS-SCENARIO ANALYSIS

Comparing behavioral patterns across all decision-making scenarios

Complete Scenario Comparison

Scenario Drift_Rate Boundary_Separation Starting_Point Non_Decision_Time Noise_SD N_Trials Coffee_Choices Tea_Choices Timeouts Coffee_Percentage Tea_Percentage Timeout_Percentage Mean_RT_Coffee Mean_RT_Tea Std_RT_Coffee Std_RT_Tea Median_RT_Overall IQR_RT_Overall
0 Scenario 1: The Standard Shopper 0.300 1.000 0.500 0.250 1.000 2000 1137 863 0 56.850 43.150 0.000 0.516 0.509 0.212 0.205 0.449 0.236
1 Scenario 2: The Coffee Lover (Biased) 0.300 1.000 0.750 0.250 1.000 2000 1621 379 0 81.050 18.950 0.000 0.412 0.579 0.189 0.201 0.372 0.212
2 Scenario 3: The Cautious Buyer (High Stakes) 0.300 2.000 1.000 0.250 1.000 2000 1272 717 11 63.600 35.850 0.550 1.223 1.215 0.707 0.705 1.035 0.858
3 Scenario 4: The Impulsive Buyer (Low Stakes) 0.300 0.600 0.300 0.250 1.000 2000 1122 878 0 56.100 43.900 0.000 0.351 0.349 0.081 0.079 0.327 0.090
4 Scenario 5: The Indecisive Shopper (Ambiguous) 0.000 1.000 0.500 0.250 1.000 2000 1024 976 0 51.200 48.800 0.000 0.519 0.519 0.225 0.224 0.447 0.244
5 Scenario 6: The Discount Hunter (Promotion on ... 0.800 1.000 0.500 0.250 1.000 2000 1411 589 0 70.550 29.450 0.000 0.502 0.496 0.207 0.205 0.438 0.222
6 Scenario 7: The High-Pressure Sale (Limited-Ti... 0.300 0.500 0.250 0.250 1.000 2000 1063 937 0 53.150 46.850 0.000 0.321 0.324 0.058 0.061 0.305 0.066
7 Scenario 8: Analysis Paralysis (Conflicting In... 0.050 1.500 0.750 0.250 2.500 2000 989 1011 0 49.450 50.550 0.000 0.350 0.348 0.083 0.080 0.325 0.086
8 Scenario 9: The Queue Effect (Dynamic Boundaries) 0.300 1.000 0.500 0.250 1.000 2000 1163 837 0 58.150 41.850 0.000 0.510 0.507 0.194 0.202 0.450 0.225
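Each row of the comparison table above is a per-scenario summary of raw simulation output. A minimal sketch of how one such row could be assembled, assuming `decisions` is an array of +1 (coffee), -1 (tea), 0 (timeout) and `reaction_times` is in seconds (the arrays here are synthetic, not the pipeline's actual output):

```python
import numpy as np

# Synthetic stand-ins for one scenario's simulation output.
rng = np.random.default_rng(1)
decisions = rng.choice([1, -1, 0], size=2000, p=[0.56, 0.43, 0.01])
reaction_times = rng.gamma(4.0, 0.13, size=2000) + 0.25

coffee, tea = decisions == 1, decisions == -1
row = {
    'N_Trials': decisions.size,
    'Coffee_Choices': int(coffee.sum()),
    'Tea_Choices': int(tea.sum()),
    'Coffee_Percentage': 100 * coffee.mean(),
    'Tea_Percentage': 100 * tea.mean(),
    'Timeout_Percentage': 100 * (decisions == 0).mean(),
    'Mean_RT_Coffee': reaction_times[coffee].mean(),
    'Mean_RT_Tea': reaction_times[tea].mean(),
    'Median_RT_Overall': np.median(reaction_times),
    # IQR as the 75th minus the 25th percentile.
    'IQR_RT_Overall': np.subtract(*np.percentile(reaction_times, [75, 25])),
}
```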

Statistical Comparisons Between Scenarios

Scenario_1 Scenario_2 Mean_RT_Diff Choice_Prop_Diff T_Test_Statistic T_Test_P_Value KS_Test_Statistic KS_Test_P_Value Significant_RT_Diff Significant_Dist_Diff
0 Scenario 1: The Standard Shopper Scenario 2: The Coffee Lover (Biased) 0.069 -0.242 10.667 0.000 0.247 0.000 True True
1 Scenario 1: The Standard Shopper Scenario 3: The Cautious Buyer (High Stakes) -0.707 -0.068 -42.945 0.000 0.593 0.000 True True
2 Scenario 1: The Standard Shopper Scenario 4: The Impulsive Buyer (Low Stakes) 0.163 0.007 32.535 0.000 0.447 0.000 True True
3 Scenario 1: The Standard Shopper Scenario 5: The Indecisive Shopper (Ambiguous) -0.006 0.056 -0.842 0.400 0.026 0.509 False False
4 Scenario 1: The Standard Shopper Scenario 6: The Discount Hunter (Promotion on ... 0.013 -0.137 1.969 0.049 0.043 0.050 True True
5 Scenario 1: The Standard Shopper Scenario 7: The High-Pressure Sale (Limited-Ti... 0.190 0.037 39.237 0.000 0.566 0.000 True True
6 Scenario 1: The Standard Shopper Scenario 8: Analysis Paralysis (Conflicting In... 0.164 0.074 32.633 0.000 0.460 0.000 True True
7 Scenario 1: The Standard Shopper Scenario 9: The Queue Effect (Dynamic Boundaries) 0.004 -0.013 0.658 0.511 0.033 0.226 False False
8 Scenario 2: The Coffee Lover (Biased) Scenario 3: The Cautious Buyer (High Stakes) -0.777 0.174 -47.272 0.000 0.667 0.000 True True
9 Scenario 2: The Coffee Lover (Biased) Scenario 4: The Impulsive Buyer (Low Stakes) 0.093 0.249 19.186 0.000 0.248 0.000 True True
10 Scenario 2: The Coffee Lover (Biased) Scenario 5: The Indecisive Shopper (Ambiguous) -0.075 0.298 -11.115 0.000 0.247 0.000 True True
11 Scenario 2: The Coffee Lover (Biased) Scenario 6: The Discount Hunter (Promotion on ... -0.056 0.105 -8.722 0.000 0.231 0.000 True True
12 Scenario 2: The Coffee Lover (Biased) Scenario 7: The High-Pressure Sale (Limited-Ti... 0.121 0.279 25.685 0.000 0.336 0.000 True True
13 Scenario 2: The Coffee Lover (Biased) Scenario 8: Analysis Paralysis (Conflicting In... 0.094 0.316 19.317 0.000 0.256 0.000 True True
14 Scenario 2: The Coffee Lover (Biased) Scenario 9: The Queue Effect (Dynamic Boundaries) -0.065 0.229 -10.303 0.000 0.253 0.000 True True
15 Scenario 3: The Cautious Buyer (High Stakes) Scenario 4: The Impulsive Buyer (Low Stakes) 0.870 0.075 54.745 0.000 0.863 0.000 True True
16 Scenario 3: The Cautious Buyer (High Stakes) Scenario 5: The Indecisive Shopper (Ambiguous) 0.701 0.124 42.327 0.000 0.589 0.000 True True
17 Scenario 3: The Cautious Buyer (High Stakes) Scenario 6: The Discount Hunter (Promotion on ... 0.720 -0.070 43.766 0.000 0.612 0.000 True True
18 Scenario 3: The Cautious Buyer (High Stakes) Scenario 7: The High-Pressure Sale (Limited-Ti... 0.898 0.105 56.654 0.000 0.915 0.000 True True
19 Scenario 3: The Cautious Buyer (High Stakes) Scenario 8: Analysis Paralysis (Conflicting In... 0.871 0.142 54.787 0.000 0.866 0.000 True True
20 Scenario 3: The Cautious Buyer (High Stakes) Scenario 9: The Queue Effect (Dynamic Boundaries) 0.711 0.054 43.389 0.000 0.596 0.000 True True
21 Scenario 4: The Impulsive Buyer (Low Stakes) Scenario 5: The Indecisive Shopper (Ambiguous) -0.169 0.049 -31.604 0.000 0.433 0.000 True True
22 Scenario 4: The Impulsive Buyer (Low Stakes) Scenario 6: The Discount Hunter (Promotion on ... -0.150 -0.144 -30.214 0.000 0.418 0.000 True True
23 Scenario 4: The Impulsive Buyer (Low Stakes) Scenario 7: The High-Pressure Sale (Limited-Ti... 0.028 0.030 12.437 0.000 0.169 0.000 True True
24 Scenario 4: The Impulsive Buyer (Low Stakes) Scenario 8: Analysis Paralysis (Conflicting In... 0.001 0.067 0.339 0.735 0.024 0.612 False False
25 Scenario 4: The Impulsive Buyer (Low Stakes) Scenario 9: The Queue Effect (Dynamic Boundaries) -0.158 -0.020 -33.272 0.000 0.445 0.000 True True
26 Scenario 5: The Indecisive Shopper (Ambiguous) Scenario 6: The Discount Hunter (Promotion on ... 0.019 -0.194 2.742 0.006 0.045 0.032 True True
27 Scenario 5: The Indecisive Shopper (Ambiguous) Scenario 7: The High-Pressure Sale (Limited-Ti... 0.196 -0.019 37.783 0.000 0.558 0.000 True True
28 Scenario 5: The Indecisive Shopper (Ambiguous) Scenario 8: Analysis Paralysis (Conflicting In... 0.169 0.018 31.702 0.000 0.448 0.000 True True
29 Scenario 5: The Indecisive Shopper (Ambiguous) Scenario 9: The Queue Effect (Dynamic Boundaries) 0.010 -0.070 1.496 0.135 0.036 0.139 False False
30 Scenario 6: The Discount Hunter (Promotion on ... Scenario 7: The High-Pressure Sale (Limited-Ti... 0.178 0.174 36.916 0.000 0.541 0.000 True True
31 Scenario 6: The Discount Hunter (Promotion on ... Scenario 8: Analysis Paralysis (Conflicting In... 0.151 0.211 30.318 0.000 0.429 0.000 True True
32 Scenario 6: The Discount Hunter (Promotion on ... Scenario 9: The Queue Effect (Dynamic Boundaries) -0.009 0.124 -1.363 0.173 0.036 0.139 False False
33 Scenario 7: The High-Pressure Sale (Limited-Ti... Scenario 8: Analysis Paralysis (Conflicting In... -0.027 0.037 -11.911 0.000 0.158 0.000 True True
34 Scenario 7: The High-Pressure Sale (Limited-Ti... Scenario 9: The Queue Effect (Dynamic Boundaries) -0.186 -0.050 -40.409 0.000 0.564 0.000 True True
35 Scenario 8: Analysis Paralysis (Conflicting In... Scenario 9: The Queue Effect (Dynamic Boundaries) -0.159 -0.087 -33.369 0.000 0.460 0.000 True True

Sample Trial-by-Trial Data

Scenario Trial_ID Decision_Raw Decision_Label Reaction_Time Drift_Rate Boundary_Separation Bias_Toward_Coffee
0 Scenario 1: The Standard Shopper 1 1 Coffee 0.507 0.300 1.000 False
1 Scenario 1: The Standard Shopper 2 1 Coffee 0.355 0.300 1.000 False
2 Scenario 1: The Standard Shopper 3 -1 Tea 0.529 0.300 1.000 False
3 Scenario 1: The Standard Shopper 4 1 Coffee 0.436 0.300 1.000 False
4 Scenario 1: The Standard Shopper 5 -1 Tea 0.451 0.300 1.000 False
5 Scenario 1: The Standard Shopper 6 1 Coffee 0.379 0.300 1.000 False
6 Scenario 1: The Standard Shopper 7 -1 Tea 0.831 0.300 1.000 False
7 Scenario 1: The Standard Shopper 8 1 Coffee 0.353 0.300 1.000 False
8 Scenario 1: The Standard Shopper 9 -1 Tea 0.360 0.300 1.000 False
9 Scenario 1: The Standard Shopper 10 -1 Tea 0.416 0.300 1.000 False
10 Scenario 1: The Standard Shopper 11 -1 Tea 0.586 0.300 1.000 False
11 Scenario 1: The Standard Shopper 12 1 Coffee 0.848 0.300 1.000 False
12 Scenario 1: The Standard Shopper 13 -1 Tea 0.376 0.300 1.000 False
13 Scenario 1: The Standard Shopper 14 -1 Tea 0.480 0.300 1.000 False
14 Scenario 1: The Standard Shopper 15 -1 Tea 0.319 0.300 1.000 False
15 Scenario 1: The Standard Shopper 16 1 Coffee 0.607 0.300 1.000 False
16 Scenario 1: The Standard Shopper 17 -1 Tea 0.356 0.300 1.000 False
17 Scenario 1: The Standard Shopper 18 1 Coffee 0.319 0.300 1.000 False
18 Scenario 1: The Standard Shopper 19 1 Coffee 0.427 0.300 1.000 False
19 Scenario 1: The Standard Shopper 20 1 Coffee 0.608 0.300 1.000 False
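The sample records above are tidy, one-row-per-trial data. An illustrative sketch of how such records could be assembled, assuming the raw decision code (+1/-1/0) maps to the labels shown and the DDM parameters are per-scenario constants (the values here are synthetic):

```python
import numpy as np
import pandas as pd

# Synthetic per-trial outcomes standing in for one scenario's simulation.
rng = np.random.default_rng(3)
decisions = rng.choice([1, -1], size=20)
rts = rng.gamma(4.0, 0.13, size=20) + 0.25

# Map the raw decision code to the labels used in the table above.
label_map = {1: 'Coffee', -1: 'Tea', 0: 'Timeout'}
trials = pd.DataFrame({
    'Scenario': 'Scenario 1: The Standard Shopper',
    'Trial_ID': np.arange(1, decisions.size + 1),
    'Decision_Raw': decisions,
    'Decision_Label': [label_map[d] for d in decisions],
    'Reaction_Time': rts,
    'Drift_Rate': 0.3,
    'Boundary_Separation': 1.0,
    'Bias_Toward_Coffee': False,
})
```

Keeping the data in this long format is what makes the groupby summaries and cross-scenario comparisons above straightforward to compute.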

Scenario Reference Guide

  • S1: Scenario 1: The Standard Shopper
  • S2: Scenario 2: The Coffee Lover (Biased)
  • S3: Scenario 3: The Cautious Buyer (High Stakes)
  • S4: Scenario 4: The Impulsive Buyer (Low Stakes)
  • S5: Scenario 5: The Indecisive Shopper (Ambiguous)
  • S6: Scenario 6: The Discount Hunter (Promotion on Coffee)
  • S7: Scenario 7: The High-Pressure Sale (Limited-Time Offer)
  • S8: Scenario 8: Analysis Paralysis (Conflicting Information)
  • S9: Scenario 9: The Queue Effect (Dynamic Boundaries)

Below is the full Python code used to generate the output above.

In [ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.lines import Line2D
import matplotlib.patches as patches
from matplotlib.colors import to_rgba
import colorsys
from matplotlib.ticker import FormatStrFormatter, MultipleLocator
from matplotlib.patches import FancyArrowPatch
from IPython.display import display, HTML
import seaborn as sns
from scipy import stats
from scipy.stats import ttest_ind, ks_2samp
import warnings
warnings.filterwarnings('ignore')  # suppress non-critical library warnings in notebook output

# Set style for enhanced visualizations
plt.style.use('default')  # Use default matplotlib style instead of seaborn
# Remove the seaborn palette setting to keep original colors

# Ensure pandas displays floats with desired precision
pd.options.display.float_format = '{:,.3f}'.format

# ===================================================================================
# ==== CORE DDM SIMULATION AND PLOTTING FUNCTIONS (Functionality Preserved) ====
# ===================================================================================

def simulate_ddm_trial(drift_rate, boundary_separation, non_decision_time, 
                      starting_point=None, dt=0.001, max_time=10.0, noise_sd=1.0):
    """Simulate a single Drift Diffusion Model trial."""
    if starting_point is None:
        starting_point = boundary_separation / 2
    evidence = starting_point
    time = 0
    evidence_trace = [evidence]
    time_points = [0]
    while (0 < evidence < boundary_separation) and (time < max_time):
        time += dt
        evidence += drift_rate * dt + np.random.normal(0, noise_sd * np.sqrt(dt))
        evidence_trace.append(evidence)
        time_points.append(time)
    
    if evidence >= boundary_separation: decision = 1
    elif evidence <= 0: decision = -1
    else: decision = 0
    reaction_time = time + non_decision_time
    return decision, reaction_time, evidence_trace, time_points

def simulate_ddm_trial_dynamic_boundaries(drift_rate, initial_boundary_separation, non_decision_time, 
                                        starting_point=None, dt=0.001, max_time=10.0, noise_sd=1.0,
                                        queue_pressure=0.0, pressure_onset=0.5):
    """
    Simulate DDM trial with dynamic boundaries that change due to external pressure.
    
    Parameters:
    - queue_pressure: How much the boundaries move (0.0 = no change, 1.0 = significant change)
    - pressure_onset: Time when pressure effects start (in seconds)
    """
    if starting_point is None:
        starting_point = initial_boundary_separation / 2
    
    evidence = starting_point
    time = 0
    evidence_trace = [evidence]
    time_points = [0]
    upper_boundary_trace = [initial_boundary_separation]
    lower_boundary_trace = [0]
    
    while time < max_time:
        time += dt
        
        # Calculate dynamic boundaries based on queue pressure
        if time >= pressure_onset and queue_pressure > 0:
            # Pressure factor increases over time
            pressure_factor = min(1.0, (time - pressure_onset) * queue_pressure)
            
            # Upper boundary (coffee) becomes easier to reach (moves DOWN due to impatience)
            current_upper = max(0.2, initial_boundary_separation * (1 - 0.3 * pressure_factor))
            # Lower boundary (tea) becomes easier to reach (moves UP from 0)
            current_lower = min(current_upper - 0.1, 0.2 * pressure_factor * initial_boundary_separation)
        else:
            current_upper = initial_boundary_separation
            current_lower = 0
        
        # Update evidence
        evidence += drift_rate * dt + np.random.normal(0, noise_sd * np.sqrt(dt))
        
        # Check if evidence crosses boundaries
        if evidence >= current_upper:
            decision = 1
            # Add final points for synchronization
            evidence_trace.append(evidence)
            time_points.append(time)
            upper_boundary_trace.append(current_upper)
            lower_boundary_trace.append(current_lower)
            break
        elif evidence <= current_lower:
            decision = -1
            # Add final points for synchronization
            evidence_trace.append(evidence)
            time_points.append(time)
            upper_boundary_trace.append(current_upper)
            lower_boundary_trace.append(current_lower)
            break
        
        # Continue simulation - add current points
        evidence_trace.append(evidence)
        time_points.append(time)
        upper_boundary_trace.append(current_upper)
        lower_boundary_trace.append(current_lower)
    else:
        decision = 0  # Timeout
    
    reaction_time = time + non_decision_time
    return decision, reaction_time, evidence_trace, time_points, upper_boundary_trace, lower_boundary_trace

def run_ddm_simulation(n_trials, drift_rate, boundary_separation, 
                      non_decision_time, starting_point=None, **kwargs):
    """Run multiple DDM trials and return the raw results."""
    decisions, reaction_times, traces, time_points = [], [], [], []
    for _ in range(n_trials):
        decision, rt, trace, times = simulate_ddm_trial(
            drift_rate, boundary_separation, non_decision_time, starting_point, **kwargs)
        decisions.append(decision)
        reaction_times.append(rt)
        traces.append(trace)
        time_points.append(times)
    return np.array(decisions), np.array(reaction_times), traces, time_points

def run_ddm_simulation_dynamic(n_trials, drift_rate, boundary_separation, 
                             non_decision_time, starting_point=None, **kwargs):
    """Run multiple DDM trials with dynamic boundaries."""
    decisions, reaction_times, traces, time_points = [], [], [], []
    upper_traces, lower_traces = [], []
    
    for _ in range(n_trials):
        result = simulate_ddm_trial_dynamic_boundaries(
            drift_rate, boundary_separation, non_decision_time, starting_point, **kwargs)
        decision, rt, trace, times, upper_boundary, lower_boundary = result
        
        decisions.append(decision)
        reaction_times.append(rt)
        traces.append(trace)
        time_points.append(times)
        upper_traces.append(upper_boundary)
        lower_traces.append(lower_boundary)
    
    return np.array(decisions), np.array(reaction_times), traces, time_points, upper_traces, lower_traces

def generate_trace_colors(n_upper, n_lower, decision_a_color, decision_b_color):
    """Generate colors for traces based on their decision outcome."""
    decision_a_rgba = to_rgba(decision_a_color)
    decision_b_rgba = to_rgba(decision_b_color)
    decision_a_hsv = colorsys.rgb_to_hsv(*decision_a_rgba[:3])
    decision_b_hsv = colorsys.rgb_to_hsv(*decision_b_rgba[:3])
    colors_a, colors_b = [], []
    for _ in range(n_upper):
        h_offset, s_offset = np.random.uniform(-0.05, 0.05), np.random.uniform(-0.1, 0.1)
        new_s = max(0, min(1, decision_a_hsv[1] + s_offset))
        new_hsv = ((decision_a_hsv[0] + h_offset) % 1.0, new_s, decision_a_hsv[2])
        colors_a.append((*colorsys.hsv_to_rgb(*new_hsv), decision_a_rgba[3]))
    for _ in range(n_lower):
        h_offset, s_offset = np.random.uniform(-0.05, 0.05), np.random.uniform(-0.1, 0.1)
        new_s = max(0, min(1, decision_b_hsv[1] + s_offset))
        new_hsv = ((decision_b_hsv[0] + h_offset) % 1.0, new_s, decision_b_hsv[2])
        colors_b.append((*colorsys.hsv_to_rgb(*new_hsv), decision_b_rgba[3]))
    return colors_a, colors_b

def plot_enhanced_ddm_visualization(
    decisions, reaction_times, traces, time_points, 
    drift_rate, boundary_separation, non_decision_time, n_trials,
    starting_point=None, display_traces=15, figsize=(14, 10), 
    max_time=3.0, noise_sd=1.0, connection_style='lines', title_override=None,
    decision_a_label="Decision A", decision_b_label="Decision B", **kwargs):
    """Create an enhanced visualization from pre-computed DDM simulation results."""
    if starting_point is None: starting_point = boundary_separation / 2

    upper_boundary_trials = np.where(decisions == 1)[0]
    lower_boundary_trials = np.where(decisions == -1)[0]
    no_decision_trials = np.where(decisions == 0)[0]
    
    valid_time_points = [t for t in time_points if t]
    actual_max_time = (max_time + non_decision_time) if not valid_time_points else min(max_time + non_decision_time, np.max([np.max(t) for t in valid_time_points]) + non_decision_time * 1.1)
    
    # Keep original styling
    decision_a_color = '#8B4513'; decision_b_color = '#A0522D'; drift_color = '#D2691E'
    boundary_color = '#CD853F'; starting_point_color = '#B8860B'; non_decision_color = '#800000'
    
    fig = plt.figure(figsize=figsize)
    gs = gridspec.GridSpec(3, 1, height_ratios=[1, 3, 1])
    seconds_formatter = FormatStrFormatter('%.1f s')
    tick_max = max_time + non_decision_time
    tick_spacing = 0.2 if tick_max <= 1.0 else (0.5 if tick_max <= 2.0 else 1.0)
    
    ax_top = plt.subplot(gs[0])
    if len(upper_boundary_trials) > 0:
        upper_rts = reaction_times[decisions == 1]
        ax_top.hist(upper_rts, bins=np.linspace(0, max_time + non_decision_time, 30), alpha=0.7, color=decision_a_color)
        ax_top.set_title(f'{decision_a_label} Choice Times (n={len(upper_rts)})', fontsize=10)
    else: ax_top.set_title(f'{decision_a_label} Choice Times (no trials)', fontsize=10)
    ax_top.set_xlim(0, actual_max_time); ax_top.set_ylabel('Frequency', fontsize=14, fontweight='bold')
    ax_top.xaxis.set_major_formatter(seconds_formatter); ax_top.xaxis.set_major_locator(MultipleLocator(tick_spacing)); ax_top.set_xticklabels([])
    
    ax_main = plt.subplot(gs[1], sharex=ax_top)
    display_indices = []
    desired_upper = min(display_traces // 2, len(upper_boundary_trials))
    desired_lower = min(display_traces - desired_upper, len(lower_boundary_trials))
    if desired_upper > 0: display_indices.extend(np.random.choice(upper_boundary_trials, desired_upper, replace=False))
    if desired_lower > 0: display_indices.extend(np.random.choice(lower_boundary_trials, desired_lower, replace=False))
    upper_display_trials = [i for i in display_indices if decisions[i] == 1]
    lower_display_trials = [i for i in display_indices if decisions[i] == -1]
    trace_colors_a, trace_colors_b = generate_trace_colors(len(upper_display_trials), len(lower_display_trials), decision_a_color, decision_b_color)
    
    for idx in display_indices:
        trace = traces[idx]; times = np.array(time_points[idx]) + non_decision_time; decision = decisions[idx]
        if decision == 1: color = trace_colors_a[upper_display_trials.index(idx) % len(trace_colors_a)]
        elif decision == -1: color = trace_colors_b[lower_display_trials.index(idx) % len(trace_colors_b)]
        else: color = 'gray'
        ax_main.plot(times, trace, color=color, linewidth=1.5, alpha=0.7)

    ax_main.axhline(y=boundary_separation, color=decision_a_color, lw=4)
    ax_main.axhline(y=0, color=decision_b_color, lw=4)
    ax_main.scatter([non_decision_time], [starting_point], color=starting_point_color, s=200, marker='o', zorder=5, edgecolor='black', lw=1.5)
    ax_main.text(non_decision_time + 0.03, starting_point, 'Start Point', fontsize=14, fontweight='bold', color=starting_point_color, va='center', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    
    arrow_x = actual_max_time * 0.75
    ax_main.add_patch(FancyArrowPatch((arrow_x, starting_point), (arrow_x, boundary_separation), arrowstyle='-|>', linewidth=2, color=boundary_color, mutation_scale=15))
    ax_main.add_patch(FancyArrowPatch((arrow_x, starting_point), (arrow_x, 0), arrowstyle='-|>', linewidth=2, color=boundary_color, mutation_scale=15))
    ax_main.text(arrow_x, boundary_separation / 2, 'Boundary\nSeparation', ha='center', va='center', color=boundary_color, fontsize=14, fontweight='bold', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    
    target_boundary = boundary_separation if drift_rate >= 0 else 0
    time_to_boundary = abs((target_boundary - starting_point) / drift_rate) if drift_rate != 0 else max_time
    arrow_end_x = min(non_decision_time + time_to_boundary, actual_max_time * 0.7)
    arrow_end_y = starting_point + drift_rate * (arrow_end_x - non_decision_time)
    arrow_end_y = np.clip(arrow_end_y, 0, boundary_separation)
    ax_main.add_patch(FancyArrowPatch((non_decision_time, starting_point), (arrow_end_x, arrow_end_y), arrowstyle='-|>', linewidth=2, color=drift_color, mutation_scale=15))
    mid_x, mid_y = (non_decision_time + arrow_end_x) / 2, (starting_point + arrow_end_y) / 2
    ax_main.text(mid_x, mid_y + 0.1, 'Drift Rate', fontsize=14, fontweight='bold', color=drift_color, va='center', ha='center', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))

    ax_main.add_patch(FancyArrowPatch((0, starting_point - 0.1), (non_decision_time, starting_point - 0.1), arrowstyle='-|>', linewidth=2, color=non_decision_color, mutation_scale=15))
    ax_main.text(non_decision_time / 2, starting_point - 0.15, 'Non-Decision Time', fontsize=14, fontweight='bold', color=non_decision_color, ha='center', va='top', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    
    decision_label_x = actual_max_time * 0.6
    ax_main.text(decision_label_x, boundary_separation, decision_a_label, ha='right', va='bottom', color=decision_a_color, fontsize=14, fontweight='bold', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    ax_main.text(decision_label_x, 0, decision_b_label, ha='right', va='top', color=decision_b_color, fontsize=14, fontweight='bold', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    
    bias_value = starting_point / boundary_separation
    if np.isclose(bias_value, 0.5): bias_desc = "Unbiased"
    elif bias_value > 0.5: bias_desc = f"Biased toward\n{decision_a_label}\n({bias_value:.2f})"
    else: bias_desc = f"Biased toward\n{decision_b_label}\n({bias_value:.2f})"
    ax_main.text(non_decision_time - 0.05, starting_point, bias_desc, fontsize=12, fontweight='bold', color='#FF8C00', va='center', ha='right', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))

    ax_bottom = plt.subplot(gs[2], sharex=ax_top)
    if len(lower_boundary_trials) > 0:
        lower_rts = reaction_times[decisions == -1]
        ax_bottom.hist(lower_rts, bins=np.linspace(0, max_time + non_decision_time, 30), alpha=0.7, color=decision_b_color)
        ax_bottom.invert_yaxis()
        ax_bottom.set_title(f'{decision_b_label} Choice Times (n={len(lower_rts)})', fontsize=10)
    else:
        ax_bottom.set_title(f'{decision_b_label} Choice Times (no trials)', fontsize=10)
        ax_bottom.invert_yaxis()
    ax_bottom.set_xlabel('Time (seconds)', fontsize=14, fontweight='bold'); ax_bottom.set_ylabel('Frequency (inv)', fontsize=14, fontweight='bold')
    ax_bottom.set_xlim(0, actual_max_time); ax_bottom.xaxis.set_major_formatter(seconds_formatter); ax_bottom.xaxis.set_major_locator(MultipleLocator(tick_spacing))
    plt.setp(ax_bottom.get_xticklabels(), fontsize=16, fontweight='bold', visible=True); ax_bottom.tick_params(axis='x', which='major', labelsize=16, pad=10)
    
    ax_main.set_ylabel('Accumulated Evidence', fontsize=14, fontweight='bold'); ax_main.set_xlim(0, actual_max_time)
    ax_main.set_ylim(-0.2, boundary_separation * 1.2); ax_main.set_xlabel('Time (seconds)', fontsize=14, fontweight='bold')
    ax_main.xaxis.set_major_formatter(seconds_formatter); ax_main.xaxis.set_major_locator(MultipleLocator(tick_spacing)); ax_main.set_xticklabels([])

    upper_pct, lower_pct, timeout_pct = np.mean(decisions == 1) * 100, np.mean(decisions == -1) * 100, np.mean(decisions == 0) * 100
    model_summary = (f"Model Summary (n={n_trials}):\n{decision_a_label}: {upper_pct:.1f}%\n{decision_b_label}: {lower_pct:.1f}%\nTimeouts: {timeout_pct:.1f}%")
    ax_main.text(0.02, 0.98, model_summary, transform=ax_main.transAxes, fontsize=12, fontweight='bold', va='top', bbox=dict(boxstyle='round', facecolor='#FFF8DC', edgecolor='#8B4513', alpha=0.9))

    if np.isclose(bias_value, 0.5): bias_label = f'Bias: {bias_value:.2f} (Unbiased)'
    elif bias_value > 0.5: bias_label = f'Bias: {bias_value:.2f} (Toward {decision_a_label})'
    else: bias_label = f'Bias: {bias_value:.2f} (Toward {decision_b_label})'
    legend_elements = [Line2D([0], [0], c=decision_a_color, lw=4, label=f'{decision_a_label} Boundary'), Line2D([0], [0], c=decision_b_color, lw=4, label=f'{decision_b_label} Boundary'), Line2D([0], [0], c=drift_color, lw=4, label=f'Drift Rate: {drift_rate:.2f}'), Line2D([0], [0], c=boundary_color, lw=4, label=f'Boundary Separation: {boundary_separation:.2f}'), Line2D([0], [0], c=non_decision_color, lw=4, label=f'Non-Decision Time: {non_decision_time:.2f}s'), Line2D([0], [0], c=starting_point_color, marker='o', ls='', ms=12, label=f'Starting Point: {starting_point:.2f}'), Line2D([0], [0], c='#FF8C00', lw=4, label=bias_label), Line2D([0], [0], c='#654321', lw=4, alpha=0.7, label=f'Noise SD: {noise_sd:.2f}')]
    ax_main.legend(handles=legend_elements, loc='upper right', fontsize=12, framealpha=0.9)
    
    plt.tight_layout()
    main_title = title_override if title_override else f'DDM: {decision_a_label} vs. {decision_b_label}'
    plt.suptitle(main_title, fontsize=20, fontweight='bold', y=1.02, color='#8B4513', style='italic')
    
    return fig

def plot_dynamic_ddm_visualization(
    decisions, reaction_times, traces, time_points, upper_traces, lower_traces,
    drift_rate, boundary_separation, non_decision_time, n_trials,
    starting_point=None, display_traces=15, figsize=(14, 10), 
    max_time=3.0, noise_sd=1.0, title_override=None,
    decision_a_label="Decision A", decision_b_label="Decision B", 
    queue_pressure=0.0, pressure_onset=0.5, **kwargs):
    """Create visualization for dynamic boundary DDM."""
    
    if starting_point is None: starting_point = boundary_separation / 2

    upper_boundary_trials = np.where(decisions == 1)[0]
    lower_boundary_trials = np.where(decisions == -1)[0]
    
    valid_time_points = [t for t in time_points if t]
    actual_max_time = (max_time + non_decision_time) if not valid_time_points else min(max_time + non_decision_time, np.max([np.max(t) for t in valid_time_points]) + non_decision_time * 1.1)
    
    # Colors
    decision_a_color = '#8B4513'; decision_b_color = '#A0522D'; drift_color = '#D2691E'
    boundary_color = '#CD853F'; starting_point_color = '#B8860B'; non_decision_color = '#800000'
    dynamic_boundary_color = '#FF6B6B'
    
    fig = plt.figure(figsize=figsize)
    gs = gridspec.GridSpec(3, 1, height_ratios=[1, 3, 1])
    seconds_formatter = FormatStrFormatter('%.1f s')
    tick_max = max_time + non_decision_time
    tick_spacing = 0.2 if tick_max <= 1.0 else (0.5 if tick_max <= 2.0 else 1.0)
    
    ax_top = plt.subplot(gs[0])
    if len(upper_boundary_trials) > 0:
        upper_rts = reaction_times[decisions == 1]
        ax_top.hist(upper_rts, bins=np.linspace(0, max_time + non_decision_time, 30), alpha=0.7, color=decision_a_color)
        ax_top.set_title(f'{decision_a_label} Choice Times (n={len(upper_rts)}) - Queue Pressure: {queue_pressure:.1f}', fontsize=10)
    else: 
        ax_top.set_title(f'{decision_a_label} Choice Times (no trials) - Queue Pressure: {queue_pressure:.1f}', fontsize=10)
    ax_top.set_xlim(0, actual_max_time); ax_top.set_ylabel('Frequency', fontsize=14, fontweight='bold')
    ax_top.xaxis.set_major_formatter(seconds_formatter); ax_top.xaxis.set_major_locator(MultipleLocator(tick_spacing)); ax_top.set_xticklabels([])
    
    ax_main = plt.subplot(gs[1], sharex=ax_top)
    
    # Show dynamic boundaries for a representative trial
    if upper_traces and lower_traces:
        # Use the longest trace for boundary demonstration
        max_trace_idx = np.argmax([len(trace) for trace in traces])
        demo_times = np.array(time_points[max_trace_idx]) + non_decision_time
        demo_upper = upper_traces[max_trace_idx]
        demo_lower = lower_traces[max_trace_idx]
        
        # Plot dynamic boundaries
        ax_main.plot(demo_times, demo_upper, color=dynamic_boundary_color, lw=3, linestyle='--', 
                    label='Dynamic Upper Boundary', alpha=0.8)
        ax_main.plot(demo_times, demo_lower, color=dynamic_boundary_color, lw=3, linestyle=':', 
                    label='Dynamic Lower Boundary', alpha=0.8)
    
    # Plot sample traces
    display_indices = []
    desired_upper = min(display_traces // 2, len(upper_boundary_trials))
    desired_lower = min(display_traces - desired_upper, len(lower_boundary_trials))
    if desired_upper > 0: display_indices.extend(np.random.choice(upper_boundary_trials, desired_upper, replace=False))
    if desired_lower > 0: display_indices.extend(np.random.choice(lower_boundary_trials, desired_lower, replace=False))
    
    upper_display_trials = [i for i in display_indices if decisions[i] == 1]
    lower_display_trials = [i for i in display_indices if decisions[i] == -1]
    trace_colors_a, trace_colors_b = generate_trace_colors(len(upper_display_trials), len(lower_display_trials), decision_a_color, decision_b_color)
    
    for idx in display_indices:
        trace = traces[idx]; times = np.array(time_points[idx]) + non_decision_time; decision = decisions[idx]
        if decision == 1: color = trace_colors_a[upper_display_trials.index(idx) % len(trace_colors_a)]
        elif decision == -1: color = trace_colors_b[lower_display_trials.index(idx) % len(trace_colors_b)]
        else: color = 'gray'
        ax_main.plot(times, trace, color=color, linewidth=1.5, alpha=0.7)

    # Original boundaries (for reference)
    ax_main.axhline(y=boundary_separation, color=decision_a_color, lw=2, alpha=0.5, linestyle='-', label='Original Upper Boundary')
    ax_main.axhline(y=0, color=decision_b_color, lw=2, alpha=0.5, linestyle='-', label='Original Lower Boundary')
    
    # Pressure onset line
    if queue_pressure > 0:
        ax_main.axvline(x=pressure_onset + non_decision_time, color='red', lw=2, linestyle='--', alpha=0.7, label=f'Queue Pressure Onset ({pressure_onset}s)')
    
    ax_main.scatter([non_decision_time], [starting_point], color=starting_point_color, s=200, marker='o', zorder=5, edgecolor='black', lw=1.5)
    ax_main.text(non_decision_time + 0.03, starting_point, 'Start Point', fontsize=14, fontweight='bold', color=starting_point_color, va='center', bbox=dict(facecolor='white', alpha=0.7, boxstyle='round'))
    
    # Queue pressure explanation
    if queue_pressure > 0:
        ax_main.text(0.02, 0.02, f'Queue Effect:\n• Upper boundary moves DOWN (easier)\n• Lower boundary moves UP (easier)\n• Pressure = {queue_pressure:.1f}', 
                    transform=ax_main.transAxes, fontsize=11, fontweight='bold', va='bottom', 
                    bbox=dict(boxstyle='round', facecolor='#FFE4E1', edgecolor='red', alpha=0.9))

    ax_bottom = plt.subplot(gs[2], sharex=ax_top)
    if len(lower_boundary_trials) > 0:
        lower_rts = reaction_times[decisions == -1]
        ax_bottom.hist(lower_rts, bins=np.linspace(0, max_time + non_decision_time, 30), alpha=0.7, color=decision_b_color)
        ax_bottom.invert_yaxis()
        ax_bottom.set_title(f'{decision_b_label} Choice Times (n={len(lower_rts)})', fontsize=10)
    else:
        ax_bottom.set_title(f'{decision_b_label} Choice Times (no trials)', fontsize=10)
        ax_bottom.invert_yaxis()
    ax_bottom.set_xlabel('Time (seconds)', fontsize=14, fontweight='bold'); ax_bottom.set_ylabel('Frequency (inv)', fontsize=14, fontweight='bold')
    ax_bottom.set_xlim(0, actual_max_time); ax_bottom.xaxis.set_major_formatter(seconds_formatter); ax_bottom.xaxis.set_major_locator(MultipleLocator(tick_spacing))
    plt.setp(ax_bottom.get_xticklabels(), fontsize=16, fontweight='bold', visible=True); ax_bottom.tick_params(axis='x', which='major', labelsize=16, pad=10)
    
    ax_main.set_ylabel('Accumulated Evidence', fontsize=14, fontweight='bold'); ax_main.set_xlim(0, actual_max_time)
    
    # Adjust y-limits for dynamic boundaries
    if upper_traces and lower_traces:
        max_upper = max([max(trace) for trace in upper_traces if trace])
        min_lower = min([min(trace) for trace in lower_traces if trace])
        ax_main.set_ylim(min_lower - 0.1, max_upper * 1.1)
    else:
        ax_main.set_ylim(-0.2, boundary_separation * 1.2)
    
    ax_main.set_xlabel('Time (seconds)', fontsize=14, fontweight='bold')
    ax_main.xaxis.set_major_formatter(seconds_formatter)
    ax_main.xaxis.set_major_locator(MultipleLocator(tick_spacing))
    ax_main.set_xticklabels([])  # shared x-axis: tick labels are shown on ax_bottom only

    upper_pct = np.mean(decisions == 1) * 100
    lower_pct = np.mean(decisions == -1) * 100
    timeout_pct = np.mean(decisions == 0) * 100
    model_summary = (f"Dynamic DDM Summary (n={n_trials}):\n{decision_a_label}: {upper_pct:.1f}%\n{decision_b_label}: {lower_pct:.1f}%\nTimeouts: {timeout_pct:.1f}%")
    ax_main.text(0.98, 0.98, model_summary, transform=ax_main.transAxes, fontsize=12, fontweight='bold', va='top', ha='right', bbox=dict(boxstyle='round', facecolor='#FFF8DC', edgecolor='#8B4513', alpha=0.9))

    # Legend for dynamic boundaries
    ax_main.legend(loc='upper left', fontsize=10, framealpha=0.9)
    
    plt.tight_layout()
    main_title = title_override if title_override else f'Dynamic DDM: {decision_a_label} vs. {decision_b_label} (Queue Effect)'
    plt.suptitle(main_title, fontsize=20, fontweight='bold', y=1.02, color='#8B4513', style='italic')
    
    return fig
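# The collapsing-boundary ("queue pressure") mechanism visualized above can be
# sketched in isolation. This is a hedged, self-contained illustration rather than
# the pipeline's actual simulator: `queue_pressure` shrinks both boundaries linearly
# over time (one common way to model time pressure), and the parameter names and
# defaults here are assumptions chosen to mirror the plotting code.

```python
import numpy as np

def simulate_dynamic_ddm_trial(drift_rate=0.3, boundary_separation=1.0,
                               queue_pressure=0.2, noise_sd=1.0,
                               dt=0.001, max_time=4.0, rng=None):
    """One trial of a DDM whose boundaries collapse linearly under queue pressure.

    Returns (decision, decision_time): +1 = upper boundary, -1 = lower, 0 = timeout.
    """
    rng = np.random.default_rng(rng)
    evidence = boundary_separation / 2.0   # unbiased start, midway between boundaries
    t = 0.0
    while t < max_time:
        collapse = queue_pressure * t      # boundaries move toward each other over time
        upper = boundary_separation - collapse
        lower = 0.0 + collapse
        if upper <= lower:                 # boundaries met before a decision: timeout
            return 0, max_time
        # Euler-Maruyama step: deterministic drift plus scaled Gaussian noise
        evidence += drift_rate * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if evidence >= upper:
            return 1, t
        if evidence <= lower:
            return -1, t
    return 0, max_time
```

# With a strong positive drift this almost always terminates quickly at the upper boundary.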

# ===================================================================================
# ==== NEW ENHANCED VISUALIZATION FUNCTIONS ====
# ===================================================================================

def plot_additional_analyses(decisions, reaction_times, scenario_name, decision_a_label, decision_b_label):
    """Create additional analytical visualizations."""
    fig, axes = plt.subplots(2, 3, figsize=(18, 12))
    fig.suptitle(f'Extended Analysis: {scenario_name}', fontsize=16, fontweight='bold')
    
    # 1. Detailed RT distributions with overlays
    valid_decisions = decisions != 0
    valid_rts = reaction_times[valid_decisions]
    valid_dec = decisions[valid_decisions]
    
    ax = axes[0, 0]
    if np.sum(valid_dec == 1) > 0:
        ax.hist(reaction_times[decisions == 1], bins=30, alpha=0.6, label=decision_a_label, color='skyblue', density=True)
    if np.sum(valid_dec == -1) > 0:
        ax.hist(reaction_times[decisions == -1], bins=30, alpha=0.6, label=decision_b_label, color='lightcoral', density=True)
    ax.set_xlabel('Reaction Time (s)')
    ax.set_ylabel('Density')
    ax.set_title('RT Distribution Comparison')
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 2. Cumulative Distribution Functions
    ax = axes[0, 1]
    if np.sum(valid_dec == 1) > 0:
        rt_a = reaction_times[decisions == 1]
        sorted_rt_a = np.sort(rt_a)
        y_a = np.arange(1, len(sorted_rt_a) + 1) / len(sorted_rt_a)
        ax.plot(sorted_rt_a, y_a, label=f'{decision_a_label} (n={len(rt_a)})', linewidth=2)
    
    if np.sum(valid_dec == -1) > 0:
        rt_b = reaction_times[decisions == -1]
        sorted_rt_b = np.sort(rt_b)
        y_b = np.arange(1, len(sorted_rt_b) + 1) / len(sorted_rt_b)
        ax.plot(sorted_rt_b, y_b, label=f'{decision_b_label} (n={len(rt_b)})', linewidth=2)
    
    ax.set_xlabel('Reaction Time (s)')
    ax.set_ylabel('Cumulative Probability')
    ax.set_title('Cumulative Distribution Functions')
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 3. Speed-Accuracy Relationship
    ax = axes[0, 2]
    if len(valid_rts) > 20:  # Need sufficient data for binning
        n_bins = min(10, len(valid_rts) // 10)
        rt_bins = np.percentile(valid_rts, np.linspace(0, 100, n_bins + 1))
        bin_centers = []
        accuracies = []
        
        for i in range(len(rt_bins) - 1):
            mask = (valid_rts >= rt_bins[i]) & (valid_rts < rt_bins[i + 1])
            if np.sum(mask) > 0:
                bin_center = np.mean(valid_rts[mask])
                # Define "accuracy" as proportion of faster choice (assuming faster = more confident)
                bin_decisions = valid_dec[mask]
                accuracy = np.mean(bin_decisions == 1) if np.mean(valid_dec == 1) > 0.5 else np.mean(bin_decisions == -1)
                bin_centers.append(bin_center)
                accuracies.append(accuracy)
        
        if len(bin_centers) > 1:
            ax.scatter(bin_centers, accuracies, s=60, alpha=0.7)
            ax.plot(bin_centers, accuracies, '-', alpha=0.5)
    
    ax.set_xlabel('Reaction Time (s)')
    ax.set_ylabel('Choice Consistency')
    ax.set_title('Speed-Accuracy Trade-off')
    ax.grid(True, alpha=0.3)
    
    # 4. Choice proportion over time
    ax = axes[1, 0]
    if len(valid_rts) > 50:
        # Create time bins and calculate choice proportions
        time_bins = np.linspace(np.min(valid_rts), np.max(valid_rts), 15)
        bin_centers = []
        choice_props = []
        
        for i in range(len(time_bins) - 1):
            mask = (valid_rts >= time_bins[i]) & (valid_rts < time_bins[i + 1])
            if np.sum(mask) > 2:
                bin_center = (time_bins[i] + time_bins[i + 1]) / 2
                prop_a = np.mean(valid_dec[mask] == 1)
                bin_centers.append(bin_center)
                choice_props.append(prop_a)
        
        if len(bin_centers) > 1:
            ax.plot(bin_centers, choice_props, 'o-', linewidth=2, markersize=6)
            ax.axhline(y=0.5, color='gray', linestyle='--', alpha=0.7)
    
    ax.set_xlabel('Reaction Time (s)')
    ax.set_ylabel(f'Proportion {decision_a_label}')
    ax.set_title('Choice Proportion vs. RT')
    ax.grid(True, alpha=0.3)
    
    # 5. RT Percentiles comparison
    ax = axes[1, 1]
    percentiles = [10, 25, 50, 75, 90]
    if np.sum(valid_dec == 1) > 0 and np.sum(valid_dec == -1) > 0:
        perc_a = np.percentile(reaction_times[decisions == 1], percentiles)
        perc_b = np.percentile(reaction_times[decisions == -1], percentiles)
        
        x_pos = np.arange(len(percentiles))
        width = 0.35
        
        ax.bar(x_pos - width/2, perc_a, width, label=decision_a_label, alpha=0.7)
        ax.bar(x_pos + width/2, perc_b, width, label=decision_b_label, alpha=0.7)
        
        ax.set_xlabel('Percentile')
        ax.set_ylabel('Reaction Time (s)')
        ax.set_title('RT Percentile Comparison')
        ax.set_xticks(x_pos)
        ax.set_xticklabels([f'{p}th' for p in percentiles])
        ax.legend()
    
    # 6. Decision timeline
    ax = axes[1, 2]
    trial_numbers = np.arange(len(decisions))
    colors = ['red' if d == -1 else 'blue' if d == 1 else 'gray' for d in decisions]
    
    # Show only first 200 trials for clarity
    n_show = min(200, len(decisions))
    ax.scatter(trial_numbers[:n_show], reaction_times[:n_show], 
              c=colors[:n_show], alpha=0.6, s=20)
    ax.set_xlabel('Trial Number')
    ax.set_ylabel('Reaction Time (s)')
    ax.set_title(f'Decision Timeline (First {n_show} trials)')
    
    # Add legend
    from matplotlib.patches import Patch
    legend_elements = [Patch(facecolor='blue', label=decision_a_label),
                      Patch(facecolor='red', label=decision_b_label),
                      Patch(facecolor='gray', label='Timeout')]
    ax.legend(handles=legend_elements)
    
    plt.tight_layout()
    return fig
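# Panel 2 above computes empirical CDFs inline via sort-and-rank. The same idea as
# a small reusable helper (a sketch; the name `empirical_cdf` is mine, not part of
# the pipeline):

```python
import numpy as np

def empirical_cdf(samples):
    """Return (sorted_values, cumulative_probs) for a 1-D sample.

    cumulative_probs[i] is the fraction of observations <= sorted_values[i],
    matching the y = arange(1, n+1) / n construction used in the CDF panel.
    """
    x = np.sort(np.asarray(samples, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
```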

def interpret_extended_analysis(decisions, reaction_times, scenario_name, params, decision_a_label, decision_b_label):
    """Generate detailed interpretations of the extended analysis plots."""
    
    # Extract key metrics
    valid_decisions = decisions != 0
    valid_rts = reaction_times[valid_decisions]
    valid_dec = decisions[valid_decisions]
    
    coffee_rts = reaction_times[decisions == 1] if np.any(decisions == 1) else []
    tea_rts = reaction_times[decisions == -1] if np.any(decisions == -1) else []
    
    coffee_prop = np.mean(decisions == 1) * 100
    tea_prop = np.mean(decisions == -1) * 100
    timeout_prop = np.mean(decisions == 0) * 100
    
    # Calculate key statistics
    if len(valid_rts) > 0:
        median_rt = np.median(valid_rts)
        rt_range = np.max(valid_rts) - np.min(valid_rts)
        rt_std = np.std(valid_rts)
    else:
        median_rt = rt_range = rt_std = 0
    
    # Determine scenario characteristics based on parameters
    drift = params['drift_rate']
    boundary = params['boundary_separation']
    bias = params.get('starting_point', boundary/2) / boundary
    noise = params.get('noise_sd', 1.0)
    
    # Generate interpretation based on scenario type
    scenario_type = ""
    if abs(drift) > 0.5:
        scenario_type = "Strong Preference"
    elif abs(drift) < 0.1:
        scenario_type = "Indecisive/Ambiguous"
    elif boundary > 1.5:
        scenario_type = "Cautious/High Stakes"
    elif boundary < 0.7:
        scenario_type = "Impulsive/Low Stakes"
    elif abs(bias - 0.5) > 0.15:
        scenario_type = "Biased"
    else:
        scenario_type = "Standard"
    
    interpretation_html = f"""
    <div style="background: #f8f9fa; border: 1px solid #dee2e6; border-radius: 8px; padding: 15px; margin: 10px 0;">
        <h3 style="color: #495057; margin-bottom: 12px;">Extended Analysis Interpretation: {scenario_name}</h3>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #007bff; margin-bottom: 8px;">1. RT Distribution Comparison (Top Left)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> Overlapping density curves of reaction times for both choices.<br>
                <strong>Key insight:</strong> {_interpret_rt_distributions(coffee_rts, tea_rts, scenario_type, median_rt)}
            </p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #28a745; margin-bottom: 8px;">2. Cumulative Distribution Functions (Top Middle)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> The proportion of decisions completed by each time point.<br>
                <strong>Key insight:</strong> {_interpret_cdfs(coffee_rts, tea_rts, scenario_type, boundary)}
            </p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #ffc107; margin-bottom: 8px;">3. Speed-Accuracy Trade-off (Top Right)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> Choice consistency across different reaction time bins.<br>
                <strong>Key insight:</strong> {_interpret_speed_accuracy(scenario_type, boundary, drift, rt_std)}
            </p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #dc3545; margin-bottom: 8px;">4. Choice Proportion vs. RT (Bottom Left)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> How choice preferences change across reaction time bins.<br>
                <strong>Key insight:</strong> {_interpret_choice_proportion_rt(drift, bias, scenario_type)}
            </p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #6f42c1; margin-bottom: 8px;">5. RT Percentile Comparison (Bottom Middle)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> Detailed comparison of reaction time distributions using percentiles.<br>
                <strong>Key insight:</strong> {_interpret_percentiles(coffee_rts, tea_rts, scenario_type)}
            </p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 5px; padding: 12px; margin: 8px 0;">
            <h4 style="color: #17a2b8; margin-bottom: 8px;">6. Decision Timeline (Bottom Right)</h4>
            <p style="font-size: 14px; color: #495057; margin: 0;">
                <strong>What it shows:</strong> Reaction times and decisions across the first 200 trials.<br>
                <strong>Key insight:</strong> {_interpret_timeline(rt_std, scenario_type, timeout_prop)}
            </p>
        </div>
        
        <div style="background: #e7f3ff; border: 1px solid #b3d9ff; border-radius: 5px; padding: 12px; margin: 10px 0;">
            <h4 style="color: #0056b3; margin-bottom: 8px;">Overall Pattern Summary</h4>
            <p style="font-size: 14px; color: #0056b3; margin: 0;">
                <strong>Scenario Type:</strong> {scenario_type}<br>
                <strong>Decision Pattern:</strong> {coffee_prop:.1f}% {decision_a_label}, {tea_prop:.1f}% {decision_b_label}, {timeout_prop:.1f}% Timeouts<br>
                <strong>Speed Characteristics:</strong> {_summarize_speed_pattern(median_rt, boundary)}<br>
                <strong>Key Behavioral Insight:</strong> {_generate_key_insight(scenario_type, drift, boundary, bias, noise)}
            </p>
        </div>
    </div>
    """
    
    return interpretation_html

def _interpret_rt_distributions(coffee_rts, tea_rts, scenario_type, median_rt):
    """Interpret the RT distribution patterns."""
    if len(coffee_rts) == 0 or len(tea_rts) == 0:
        return "One choice dominates completely, showing extreme bias in the decision process."
    
    coffee_mean = np.mean(coffee_rts)
    tea_mean = np.mean(tea_rts)
    
    if scenario_type == "Impulsive/Low Stakes":
        return f"Both distributions are tightly clustered around {median_rt:.2f}s, confirming rapid, impulsive decision-making. Low boundary separation forces quick choices regardless of preference."
    elif scenario_type == "Cautious/High Stakes":
        return f"Distributions are broader and shifted right (mean ~{np.mean([coffee_mean, tea_mean]):.2f}s), reflecting deliberate evidence accumulation due to high decision thresholds."
    elif scenario_type == "Biased":
        if coffee_mean < tea_mean:
            return f"Coffee choices are faster (mean {coffee_mean:.2f}s) than tea choices (mean {tea_mean:.2f}s), indicating the bias reduces deliberation time for the preferred option."
        else:
            return f"Tea choices are faster, suggesting the bias creates asymmetric processing favoring one option over the other."
    elif scenario_type == "Indecisive/Ambiguous":
        return f"Both distributions overlap heavily around {median_rt:.2f}s, showing that without clear preferences, decision times are driven primarily by random noise."
    else:
        return f"Distributions show typical decision-making patterns with overlapping peaks around {median_rt:.2f}s, indicating normal evidence accumulation processes."

def _interpret_cdfs(coffee_rts, tea_rts, scenario_type, boundary):
    """Interpret the cumulative distribution function patterns."""
    if scenario_type == "Impulsive/Low Stakes":
        return "Both curves rise very steeply and plateau quickly, confirming that most decisions happen within the first 0.4-0.5 seconds due to low decision thresholds."
    elif scenario_type == "Cautious/High Stakes":
        return f"Curves rise gradually and plateau later, showing that the high boundary separation (={boundary:.1f}) requires more time for evidence accumulation before reaching a decision."
    elif scenario_type == "Strong Preference":
        return "The preferred choice shows a steeper initial rise, indicating faster decision times when strong drift favors one option."
    elif scenario_type == "Biased":
        return "One curve consistently leads the other, demonstrating how starting point bias affects the speed of reaching different decision boundaries."
    else:
        return "Curves follow similar trajectories, indicating balanced decision processes with no systematic speed advantages for either choice."

def _interpret_speed_accuracy(scenario_type, boundary, drift, rt_std):
    """Interpret the speed-accuracy trade-off pattern."""
    if scenario_type == "Impulsive/Low Stakes":
        return f"The pattern is erratic because decisions are made so quickly (low boundary={boundary:.1f}) that there's insufficient time for traditional speed-accuracy trade-offs to emerge."
    elif scenario_type == "Strong Preference":
        return f"Clear preference (drift={drift:.1f}) means 'accuracy' is high across all speed levels, as the strong signal makes both fast and slow decisions converge on the same choice."
    elif scenario_type == "Cautious/High Stakes":
        return f"High boundary separation (={boundary:.1f}) creates a strong speed-accuracy trade-off - slower decisions show higher consistency as more evidence is required."
    elif scenario_type == "Indecisive/Ambiguous":
        return f"Random fluctuations dominate due to zero drift, so the relationship between speed and consistency is weak and primarily driven by noise."
    else:
        return "Moderate speed-accuracy trade-off reflects balanced decision-making where additional deliberation time slightly improves choice consistency."

def _interpret_choice_proportion_rt(drift, bias, scenario_type):
    """Interpret choice proportion changes across reaction times."""
    if scenario_type == "Strong Preference":
        return f"Choice proportions remain stable across RT bins because the strong drift (={drift:.1f}) dominates regardless of decision speed."
    elif scenario_type == "Biased":
        return f"Faster decisions show stronger bias effects, while slower decisions allow more evidence accumulation to potentially override the initial bias (bias={bias:.2f})."
    elif scenario_type == "Indecisive/Ambiguous":
        return "Choice proportions fluctuate randomly around 50% across all RT bins, confirming that decisions are driven by noise rather than systematic preferences."
    elif scenario_type == "Impulsive/Low Stakes":
        return "Proportions vary across RT bins but patterns are less stable due to the rapid decision-making process limiting evidence accumulation."
    else:
        return "Choice proportions show moderate stability across RT bins, indicating that preference strength is consistent regardless of decision speed."

def _interpret_percentiles(coffee_rts, tea_rts, scenario_type):
    """Interpret the percentile comparison patterns."""
    if len(coffee_rts) == 0 or len(tea_rts) == 0:
        return "Cannot compare percentiles as one choice was never made."
    
    if scenario_type == "Impulsive/Low Stakes":
        return "All percentiles are clustered in a narrow range (typically 0.3-0.5s), showing that the low boundary creates consistently fast decisions for both choices."
    elif scenario_type == "Cautious/High Stakes":
        return "Percentiles span a wide range with high values across all levels, reflecting the extended deliberation time required by high decision thresholds."
    elif scenario_type == "Biased":
        coffee_90th = np.percentile(coffee_rts, 90) if len(coffee_rts) > 10 else 0
        tea_90th = np.percentile(tea_rts, 90) if len(tea_rts) > 10 else 0
        if abs(coffee_90th - tea_90th) > 0.1:
            return "Percentiles differ systematically between choices, showing how bias affects the entire RT distribution, not just the mean."
        else:
            return "Despite bias in choice frequency, RT percentiles are similar, indicating bias affects choice probability more than speed."
    else:
        return "Percentiles are nearly identical between choices, confirming that both decisions follow similar speed profiles in this balanced scenario."

def _interpret_timeline(rt_std, scenario_type, timeout_prop):
    """Interpret the decision timeline pattern."""
    if scenario_type == "Impulsive/Low Stakes":
        return f"Points cluster tightly around 0.3-0.4s with little variation across trials, showing consistent fast decision-making with no learning or fatigue effects."
    elif scenario_type == "Cautious/High Stakes":
        if timeout_prop > 0.5:
            return f"Wide scatter with some timeouts visible, reflecting the challenging nature of high-threshold decisions and occasional failures to reach criterion."
        else:
            return f"Points show greater vertical spread (RT variability={rt_std:.3f}), reflecting the variable time needed for evidence accumulation in high-stakes decisions."
    elif scenario_type == "Indecisive/Ambiguous":
        return f"Random scatter of colored points around the median, with no clear patterns, confirming that decisions are driven by noise rather than systematic processes."
    else:
        return f"Stable pattern across trials with moderate variability (SD={rt_std:.3f}), indicating consistent decision processes without adaptation or fatigue effects."

def _summarize_speed_pattern(median_rt, boundary):
    """Summarize the overall speed characteristics."""
    if median_rt < 0.4:
        return f"Very fast decisions (median {median_rt:.2f}s) due to low decision threshold"
    elif median_rt > 1.0:
        return f"Slow, deliberate decisions (median {median_rt:.2f}s) reflecting high caution threshold"
    else:
        return f"Moderate speed decisions (median {median_rt:.2f}s) showing balanced evidence accumulation"

def _generate_key_insight(scenario_type, drift, boundary, bias, noise):
    """Generate the key behavioral insight for the scenario."""
    insights = {
        "Strong Preference": f"High drift rate ({drift:.1f}) creates consistent, confident decisions with minimal impact from other parameters.",
        "Impulsive/Low Stakes": f"Low boundary ({boundary:.1f}) prioritizes speed over accuracy, creating rapid but potentially error-prone decisions.",
        "Cautious/High Stakes": f"High boundary ({boundary:.1f}) ensures thorough evidence evaluation at the cost of decision speed.",
        "Biased": f"Starting point bias ({bias:.2f}) systematically favors one option while maintaining normal decision processes.",
        "Indecisive/Ambiguous": f"Zero drift with high noise ({noise:.1f}) creates purely noise-driven decisions approximating random choice.",
        "Standard": "Balanced parameters create typical decision-making patterns with moderate speed and reasonable accuracy."
    }
    return insights.get(scenario_type, "Complex parameter interactions create unique decision-making patterns.")


def create_scenario_explanation(scenario_name, params, psychological_context, parameter_effects):
    """Create rich HTML explanation for each scenario."""
    
    html_content = f"""
    <div style="background: #f8f9fa; border: 1px solid #dee2e6; 
                border-radius: 5px; padding: 15px; margin: 10px 0;">
        
        <h2 style="color: #495057; margin-bottom: 10px; border-bottom: 2px solid #6c757d; padding-bottom: 5px;">
            {scenario_name}
        </h2>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 3px; padding: 10px; margin: 8px 0;">
            <h3 style="color: #dc3545; margin-bottom: 8px;">Psychological Context</h3>
            <p style="font-size: 14px; line-height: 1.5; color: #495057;">{psychological_context}</p>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 3px; padding: 10px; margin: 8px 0;">
            <h3 style="color: #28a745; margin-bottom: 8px;">Parameter Configuration</h3>
            <ul style="font-size: 14px; color: #495057;">
                <li><strong>Drift Rate:</strong> {params['drift_rate']:.2f} - {parameter_effects['drift_rate']}</li>
                <li><strong>Boundary Separation:</strong> {params['boundary_separation']:.2f} - {parameter_effects['boundary_separation']}</li>
                <li><strong>Starting Point:</strong> {params.get('starting_point', params['boundary_separation']/2):.2f} - {parameter_effects['starting_point']}</li>
                <li><strong>Non-Decision Time:</strong> {params['non_decision_time']:.2f}s - {parameter_effects['non_decision_time']}</li>
                <li><strong>Noise SD:</strong> {params.get('noise_sd', 1.0):.2f} - {parameter_effects['noise_sd']}</li>
            </ul>
        </div>
        
        <div style="background: #ffffff; border: 1px solid #e9ecef; border-radius: 3px; padding: 10px; margin: 8px 0;">
            <h3 style="color: #6f42c1; margin-bottom: 8px;">Expected Behavioral Outcomes</h3>
            <p style="font-size: 14px; line-height: 1.5; color: #495057;">
                This parameter combination should produce decision patterns characterized by 
                {_predict_behavior(params)}.
            </p>
        </div>
    </div>
    """
    
    return html_content

def _predict_behavior(params):
    """Predict behavioral outcomes based on parameters."""
    drift = params['drift_rate']
    boundary = params['boundary_separation']
    bias = params.get('starting_point', boundary/2) / boundary
    
    predictions = []
    
    if abs(drift) > 0.5:
        predictions.append("strong preference for one option")
    elif abs(drift) < 0.1:
        predictions.append("high uncertainty and potential timeouts")
    
    if boundary > 1.5:
        predictions.append("slower but more deliberate decisions")
    elif boundary < 0.7:
        predictions.append("rapid, potentially impulsive choices")
    
    if abs(bias - 0.5) > 0.1:
        predictions.append("systematic bias toward one alternative")
    
    return ", ".join(predictions) if predictions else "balanced decision-making behavior"

# ===================================================================================
# ==== ENHANCED DATA ANALYSIS FUNCTIONS ====
# ===================================================================================

def create_comprehensive_dataframes(all_scenarios_data):
    """Create comprehensive DataFrames comparing all scenarios."""
    
    # 1. Scenario Summary DataFrame
    scenario_summary = []
    for scenario_name, data in all_scenarios_data.items():
        decisions, rts, params = data['decisions'], data['reaction_times'], data['params']
        
        valid_mask = decisions != 0
        coffee_mask = decisions == 1
        tea_mask = decisions == -1
        timeout_mask = decisions == 0
        
        summary = {
            'Scenario': scenario_name,
            'Drift_Rate': params['drift_rate'],
            'Boundary_Separation': params['boundary_separation'],
            'Starting_Point': params.get('starting_point', params['boundary_separation']/2),
            'Non_Decision_Time': params['non_decision_time'],
            'Noise_SD': params.get('noise_sd', 1.0),
            'N_Trials': len(decisions),
            'Coffee_Choices': np.sum(coffee_mask),
            'Tea_Choices': np.sum(tea_mask),
            'Timeouts': np.sum(timeout_mask),
            'Coffee_Percentage': np.mean(coffee_mask) * 100,
            'Tea_Percentage': np.mean(tea_mask) * 100,
            'Timeout_Percentage': np.mean(timeout_mask) * 100,
            'Mean_RT_Coffee': np.mean(rts[coffee_mask]) if np.any(coffee_mask) else np.nan,
            'Mean_RT_Tea': np.mean(rts[tea_mask]) if np.any(tea_mask) else np.nan,
            'Std_RT_Coffee': np.std(rts[coffee_mask]) if np.any(coffee_mask) else np.nan,
            'Std_RT_Tea': np.std(rts[tea_mask]) if np.any(tea_mask) else np.nan,
            'Median_RT_Overall': np.median(rts[valid_mask]) if np.any(valid_mask) else np.nan,
            'IQR_RT_Overall': np.percentile(rts[valid_mask], 75) - np.percentile(rts[valid_mask], 25) if np.any(valid_mask) else np.nan
        }
        scenario_summary.append(summary)
    
    scenario_df = pd.DataFrame(scenario_summary)
    
    # 2. Detailed Trial-by-Trial DataFrame (sample from all scenarios)
    detailed_trials = []
    for scenario_name, data in all_scenarios_data.items():
        decisions, rts = data['decisions'], data['reaction_times']
        params = data['params']
        
        # Sample first 100 trials from each scenario
        n_sample = min(100, len(decisions))
        for i in range(n_sample):
            trial_data = {
                'Scenario': scenario_name,
                'Trial_ID': i + 1,
                'Decision_Raw': decisions[i],
                'Decision_Label': 'Coffee' if decisions[i] == 1 else ('Tea' if decisions[i] == -1 else 'Timeout'),
                'Reaction_Time': rts[i],
                'Drift_Rate': params['drift_rate'],
                'Boundary_Separation': params['boundary_separation'],
                'Bias_Toward_Coffee': (params.get('starting_point', params['boundary_separation']/2) / params['boundary_separation']) > 0.5
            }
            detailed_trials.append(trial_data)
    
    detailed_df = pd.DataFrame(detailed_trials)
    
    # 3. Statistical Comparison DataFrame
    stat_comparisons = []
    scenario_names = list(all_scenarios_data.keys())
    
    for i, scenario1 in enumerate(scenario_names):
        for j, scenario2 in enumerate(scenario_names[i+1:], i+1):
            data1 = all_scenarios_data[scenario1]
            data2 = all_scenarios_data[scenario2]
            
            # Extract valid RTs
            rt1 = data1['reaction_times'][data1['decisions'] != 0]
            rt2 = data2['reaction_times'][data2['decisions'] != 0]
            
            # Extract choice proportions
            prop1 = np.mean(data1['decisions'] == 1)
            prop2 = np.mean(data2['decisions'] == 1)
            
            # Statistical tests
            if len(rt1) > 10 and len(rt2) > 10:
                t_stat, t_pval = ttest_ind(rt1, rt2)
                ks_stat, ks_pval = ks_2samp(rt1, rt2)
            else:
                t_stat = t_pval = ks_stat = ks_pval = np.nan
            
            comparison = {
                'Scenario_1': scenario1,
                'Scenario_2': scenario2,
                'Mean_RT_Diff': np.mean(rt1) - np.mean(rt2) if len(rt1) > 0 and len(rt2) > 0 else np.nan,
                'Choice_Prop_Diff': prop1 - prop2,
                'T_Test_Statistic': t_stat,
                'T_Test_P_Value': t_pval,
                'KS_Test_Statistic': ks_stat,
                'KS_Test_P_Value': ks_pval,
                'Significant_RT_Diff': t_pval < 0.05 if not np.isnan(t_pval) else False,
                'Significant_Dist_Diff': ks_pval < 0.05 if not np.isnan(ks_pval) else False
            }
            stat_comparisons.append(comparison)
    
    stats_df = pd.DataFrame(stat_comparisons)
    
    return scenario_df, detailed_df, stats_df
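# The pairwise testing loop above can be exercised standalone. A sketch using
# `scipy.stats.ttest_ind` and `ks_2samp` (the same tests the pipeline imports) on
# two synthetic RT samples; the sample parameters are arbitrary stand-ins for a
# low- vs. high-boundary scenario.

```python
import numpy as np
from scipy.stats import ttest_ind, ks_2samp

demo_rng = np.random.default_rng(42)
rt_fast = demo_rng.normal(loc=0.45, scale=0.08, size=500)  # e.g. low-boundary scenario
rt_slow = demo_rng.normal(loc=0.90, scale=0.20, size=500)  # e.g. high-boundary scenario

t_stat, t_pval = ttest_ind(rt_fast, rt_slow)       # compares mean RTs
ks_stat, ks_pval = ks_2samp(rt_fast, rt_slow)      # compares full RT distributions
# With means this far apart, both tests flag a significant difference (p << 0.05).
```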

def create_and_display_enhanced_summaries(decisions, reaction_times, scenario_name):
    """Enhanced version of the original summary function with additional analyses."""
    df = pd.DataFrame({'reaction_time': reaction_times, 'decision_raw': decisions})
    df['decision_label'] = df['decision_raw'].map({1: 'Buy Coffee', -1: 'Buy Tea', 0: 'Timeout'})

    display(HTML(f"<h3 style='color: #495057; background: #f8f9fa; padding: 8px; border-radius: 3px;'>Choice Summary - {scenario_name}</h3>"))
    summary_choice_df = pd.DataFrame({
        'Count': df['decision_label'].value_counts(),
        'Percentage (%)': df['decision_label'].value_counts(normalize=True).mul(100)
    }).rename_axis('Decision')
    display(summary_choice_df)

    display(HTML("<h4 style='color: #28a745;'>Reaction Time Summary (in seconds)</h4>"))
    rt_summary_df = (
        df[df['decision_raw'] != 0]
        .groupby('decision_label')['reaction_time']
        .agg(['count', 'mean', 'std', 'min',
              lambda x: x.quantile(0.25), 'median',
              lambda x: x.quantile(0.75), 'max'])
        .rename_axis('Decision')
    )
    rt_summary_df.columns = ['count', 'mean', 'std', 'min', '25%', 'median', '75%', 'max']
    if not rt_summary_df.empty: 
        display(rt_summary_df)
    else: 
        display(HTML("<p>No decisions were made before timeout.</p>"))
    
    # Additional detailed analysis
    display(HTML("<h4 style='color: #dc3545;'>Extended Statistical Analysis</h4>"))
    
    if len(df[df['decision_raw'] != 0]) > 0:
        # Percentile analysis
        # Named aggregators (unique names sidestep pandas' duplicate-lambda restriction)
        def p10(x): return np.percentile(x, 10)
        def p90(x): return np.percentile(x, 90)
        def iqr(x): return np.percentile(x, 75) - np.percentile(x, 25)
        def cv(x): return np.std(x) / np.mean(x) if np.mean(x) != 0 else 0
        percentile_df = (
            df[df['decision_raw'] != 0]
            .groupby('decision_label')['reaction_time']
            .agg([p10, p90, iqr, cv])
            .round(3)
        )
        percentile_df.columns = ['10th_percentile', '90th_percentile', 'IQR', 'coefficient_of_variation']
        
        display(HTML("<h5>Detailed Percentile Analysis</h5>"))
        display(percentile_df)
        
        # Speed bins analysis
        rt_valid = df[df['decision_raw'] != 0]['reaction_time']
        if len(rt_valid) > 10:
            bins = pd.qcut(rt_valid, q=3, labels=['Fast', 'Medium', 'Slow'])
            speed_analysis = pd.crosstab(df[df['decision_raw'] != 0]['decision_label'], bins, normalize='columns') * 100
            
            display(HTML("<h5>Decision Patterns by Speed</h5>"))
            display(speed_analysis.round(1))
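
# The speed-bin table above combines pd.qcut (equal-frequency binning) with
# pd.crosstab(..., normalize='columns'). A tiny self-contained illustration of
# that pattern; the choices and RTs below are synthetic, for clarity only:
def _demo_speed_bins(seed=1):
    """Cross-tabulate synthetic choices against RT terciles; each column sums to 100%."""
    import numpy as np
    import pandas as pd
    rng = np.random.default_rng(seed)
    rt = pd.Series(rng.exponential(1.0, size=300), name='reaction_time')
    choice = pd.Series(np.where(rng.random(300) < 0.6, 'A', 'B'), name='choice')
    terciles = pd.qcut(rt, q=3, labels=['Fast', 'Medium', 'Slow'])
    return pd.crosstab(choice, terciles, normalize='columns') * 100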

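# Hedged analytic reference for the constant-boundary scenarios defined below
# (illustrative only; the pipeline itself relies on simulation). For a Wiener
# process with drift v, noise s, absorbing boundaries at 0 and a, and start z,
# the classic first-passage result for the upper-boundary ("coffee") choice is
#     P(upper) = (1 - exp(-2*v*z/s**2)) / (1 - exp(-2*v*a/s**2))   if v != 0
#     P(upper) = z / a                                             if v == 0
# which predicts choice proportions in the no-timeout limit:
def analytic_choice_probability(drift_rate, boundary_separation, starting_point, noise_sd=1.0):
    import numpy as np
    v, a, z, s = drift_rate, boundary_separation, starting_point, noise_sd
    if v == 0:
        return z / a
    return (1 - np.exp(-2 * v * z / s**2)) / (1 - np.exp(-2 * v * a / s**2))
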
# ===================================================================================
# ==== MAIN ENHANCED EXECUTION BLOCK ====
# ===================================================================================

if __name__ == "__main__":
    DECISION_A_LABEL = "Buy Coffee"
    DECISION_B_LABEL = "Buy Tea"

    SHARED_PARAMS = {'n_trials': 2000, 'non_decision_time': 0.25, 'max_time': 4.0, 'dt': 0.001}

    # Store all scenario data for cross-scenario analysis
    all_scenarios_data = {}
    
    # Enhanced scenario definitions with detailed explanations
    scenarios = {
        "Scenario 1: The Standard Shopper": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 1.0, 'starting_point': 0.5, 'noise_sd': 1.0},
            'psychological_context': "Represents a typical consumer making a routine purchase decision. They have a slight preference but remain open to either option. This is the baseline scenario against which others are compared.",
            'parameter_effects': {
                'drift_rate': "Moderate positive drift indicates slight preference for coffee",
                'boundary_separation': "Standard threshold reflects normal decision caution",
                'starting_point': "Neutral starting point shows no initial bias",
                'non_decision_time': "Normal perceptual/motor processing time",
                'noise_sd': "Standard decision noise level"
            }
        },
        
        "Scenario 2: The Coffee Lover (Biased)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 1.0, 'starting_point': 0.75, 'noise_sd': 1.0},
            'psychological_context': "Someone with a strong prior preference for coffee, perhaps due to habit, taste preference, or caffeine dependency. They start closer to the coffee decision boundary.",
            'parameter_effects': {
                'drift_rate': "Same information quality as standard shopper",
                'boundary_separation': "Same decision caution as standard shopper", 
                'starting_point': "Biased toward coffee choice boundary",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level"
            }
        },
        
        "Scenario 3: The Cautious Buyer (High Stakes)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 2.0, 'starting_point': 1.0, 'noise_sd': 1.0},
            'psychological_context': "A careful decision-maker who wants to be very sure before committing. This could represent expensive purchases, health-conscious consumers, or someone with decision anxiety.",
            'parameter_effects': {
                'drift_rate': "Same information processing rate",
                'boundary_separation': "High threshold requires more evidence",
                'starting_point': "Neutral relative to expanded boundaries",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level"
            }
        },
        
        "Scenario 4: The Impulsive Buyer (Low Stakes)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 0.6, 'starting_point': 0.3, 'noise_sd': 1.0},
            'psychological_context': "Quick decision-maker who doesn't deliberate much. Could represent low-cost purchases, time pressure, or personality-driven impulsivity.",
            'parameter_effects': {
                'drift_rate': "Same information quality",
                'boundary_separation': "Low threshold enables quick decisions",
                'starting_point': "Neutral relative to compressed boundaries",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level"
            }
        },
        
        "Scenario 5: The Indecisive Shopper (Ambiguous)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.0, 'boundary_separation': 1.0, 'starting_point': 0.5, 'noise_sd': 1.0},
            'psychological_context': "Faces genuinely ambiguous options with no clear preference. Both choices seem equally attractive, leading to decision difficulty and potential timeouts.",
            'parameter_effects': {
                'drift_rate': "Zero drift - no systematic preference",
                'boundary_separation': "Standard decision threshold",
                'starting_point': "Perfectly neutral starting point",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level - decision driven by random fluctuations"
            }
        },
        
        "Scenario 6: The Discount Hunter (Promotion on Coffee)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.8, 'boundary_separation': 1.0, 'starting_point': 0.5, 'noise_sd': 1.0},
            'psychological_context': "A strong external incentive (discount/promotion) creates a clear preference. Represents a situation where economic factors override personal preferences.",
            'parameter_effects': {
                'drift_rate': "High positive drift due to promotional advantage",
                'boundary_separation': "Standard decision threshold",
                'starting_point': "Neutral starting point despite strong drift",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level"
            }
        },
        
        "Scenario 7: The High-Pressure Sale (Limited-Time Offer)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 0.5, 'starting_point': 0.25, 'noise_sd': 1.0},
            'psychological_context': "Time pressure or sales tactics force quick decisions with reduced deliberation. The urgency lowers decision thresholds.",
            'parameter_effects': {
                'drift_rate': "Standard preference strength",
                'boundary_separation': "Very low threshold due to time pressure",
                'starting_point': "Neutral relative to compressed boundaries",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level"
            }
        },
        
        "Scenario 8: Analysis Paralysis (Conflicting Information)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.05, 'boundary_separation': 1.5, 'starting_point': 0.75, 'noise_sd': 2.5},
            'psychological_context': "Overwhelmed by conflicting information and multiple factors to consider. High noise represents internal conflict and uncertainty about decision criteria.",
            'parameter_effects': {
                'drift_rate': "Very weak preference due to conflicting signals",
                'boundary_separation': "Elevated threshold seeking more certainty",
                'starting_point': "Neutral relative to the elevated boundaries (0.75 is the midpoint of 1.5)",
                'non_decision_time': "Same processing time",
                'noise_sd': "High noise represents internal conflict and uncertainty"
            }
        },
        
        # NEW: Dynamic Boundary Scenario
        "Scenario 9: The Queue Effect (Dynamic Boundaries)": {
            'params': {**SHARED_PARAMS, 'drift_rate': 0.3, 'boundary_separation': 1.0, 'starting_point': 0.5, 'noise_sd': 1.0, 'queue_pressure': 1.2, 'pressure_onset': 0.5},
            'psychological_context': "Represents decision-making when external pressure increases over time (e.g., a lengthening queue or a time constraint). As impatience grows, both boundaries collapse toward the evidence: the upper (coffee) boundary moves down and the lower (tea) boundary moves up, so either choice requires less evidence. This models the psychological pressure to make any decision quickly.",
            'parameter_effects': {
                'drift_rate': "Standard preference strength",
                'boundary_separation': "Initial threshold before pressure effects",
                'starting_point': "Neutral starting point",
                'non_decision_time': "Same processing time",
                'noise_sd': "Same noise level",
                'queue_pressure': "Dynamic boundary movement strength",
                'pressure_onset': "When queue pressure effects begin"
            }
        }
    }
    
    # Run enhanced scenarios (including dynamic boundary scenario)
    for scenario_name, scenario_info in scenarios.items():
        params = scenario_info['params']
        
        # Display rich explanation
        explanation_html = create_scenario_explanation(
            scenario_name, 
            params, 
            scenario_info['psychological_context'], 
            scenario_info['parameter_effects']
        )
        display(HTML(explanation_html))
        
        # Check if this is the dynamic boundary scenario
        if 'queue_pressure' in params:
            # Run dynamic simulation
            decisions, rts, traces, time_points, upper_traces, lower_traces = run_ddm_simulation_dynamic(**params)
            
            # Store data for cross-scenario analysis (modified structure)
            all_scenarios_data[scenario_name] = {
                'decisions': decisions,
                'reaction_times': rts,
                'traces': traces,
                'time_points': time_points,
                'params': params,
                'upper_traces': upper_traces,
                'lower_traces': lower_traces
            }
            
            # Enhanced summaries
            create_and_display_enhanced_summaries(decisions, rts, scenario_name)
            
            # Dynamic boundary visualization
            fig1 = plot_dynamic_ddm_visualization(decisions, rts, traces, time_points, upper_traces, lower_traces,
                                                 **params, title_override=scenario_name, 
                                                 decision_a_label=DECISION_A_LABEL, 
                                                 decision_b_label=DECISION_B_LABEL)
            plt.show()
            
        else:
            # Run standard simulation
            decisions, rts, traces, time_points = run_ddm_simulation(**params)
            
            # Store data for cross-scenario analysis
            all_scenarios_data[scenario_name] = {
                'decisions': decisions,
                'reaction_times': rts,
                'traces': traces,
                'time_points': time_points,
                'params': params
            }
            
            # Enhanced summaries
            create_and_display_enhanced_summaries(decisions, rts, scenario_name)
            
            # Original visualization (preserved)
            fig1 = plot_enhanced_ddm_visualization(decisions, rts, traces, time_points, **params, 
                                                  title_override=scenario_name, 
                                                  decision_a_label=DECISION_A_LABEL, 
                                                  decision_b_label=DECISION_B_LABEL)
            plt.show()
        
        # Additional analytical plots (for all scenarios)
        fig2 = plot_additional_analyses(decisions, rts, scenario_name, DECISION_A_LABEL, DECISION_B_LABEL)
        plt.show()
        
        display(HTML("<hr style='border: 1px solid #dee2e6; margin: 20px 0;'>"))
    
    # ===================================================================================
    # ==== CROSS-SCENARIO COMPARATIVE ANALYSIS ====
    # ===================================================================================
    
    display(HTML("""
    <div style="background: #495057; color: white; padding: 15px; 
                border-radius: 5px; margin: 15px 0; text-align: center;">
        <h1 style="margin: 0; font-size: 24px;">COMPREHENSIVE CROSS-SCENARIO ANALYSIS</h1>
        <p style="margin: 8px 0 0 0; font-size: 14px;">Comparing behavioral patterns across all decision-making scenarios</p>
    </div>
    """))
    
    # Generate comprehensive DataFrames
    scenario_df, detailed_df, stats_df = create_comprehensive_dataframes(all_scenarios_data)
    
    display(HTML("<h2 style='color: #495057;'>Complete Scenario Comparison</h2>"))
    display(scenario_df)
    
    display(HTML("<h2 style='color: #495057;'>Statistical Comparisons Between Scenarios</h2>"))
    display(stats_df)
    
    display(HTML("<h2 style='color: #495057;'>Sample Trial-by-Trial Data</h2>"))
    display(detailed_df.head(20))
    
    # Create final comparative visualizations with proper scenario names
    fig, axes = plt.subplots(2, 2, figsize=(20, 16))
    fig.suptitle('Cross-Scenario Behavioral Comparison', fontsize=20, fontweight='bold')
    
    # 1. Choice proportions heatmap
    ax = axes[0, 0]
    choice_matrix = scenario_df[['Coffee_Percentage', 'Tea_Percentage', 'Timeout_Percentage']].values
    im = ax.imshow(choice_matrix, cmap='RdYlBu', aspect='auto')
    ax.set_xticks(range(3))
    ax.set_xticklabels(['Coffee %', 'Tea %', 'Timeout %'])
    ax.set_yticks(range(len(scenarios)))
    
    # Use better scenario names (first 20 characters)
    scenario_labels = [name[:20] + "..." if len(name) > 20 else name for name in scenarios.keys()]
    ax.set_yticklabels(scenario_labels, fontsize=9)
    ax.set_title('Choice Patterns Across Scenarios')
    
    # Add text annotations
    for i in range(len(scenarios)):
        for j in range(3):
            ax.text(j, i, f'{choice_matrix[i, j]:.1f}%',
                    ha="center", va="center", color="black", fontweight='bold')
    
    plt.colorbar(im, ax=ax)
    
    # 2. Mean reaction times comparison
    ax = axes[0, 1]
    # NaN means no decisions of that type occurred; fillna(0) renders them as zero-height bars
    coffee_rts = scenario_df['Mean_RT_Coffee'].fillna(0)
    tea_rts = scenario_df['Mean_RT_Tea'].fillna(0)
    
    x = np.arange(len(scenario_labels))
    width = 0.35
    
    ax.bar(x - width/2, coffee_rts, width, label='Coffee', alpha=0.7, color='brown')
    ax.bar(x + width/2, tea_rts, width, label='Tea', alpha=0.7, color='green')
    
    ax.set_xlabel('Scenario')
    ax.set_ylabel('Mean Reaction Time (s)')
    ax.set_title('Mean Reaction Times by Choice')
    ax.set_xticks(x)
    ax.set_xticklabels([f'S{i+1}' for i in range(len(scenario_labels))], rotation=0)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 3. Parameter effects visualization
    ax = axes[1, 0]
    drift_rates = scenario_df['Drift_Rate']
    boundaries = scenario_df['Boundary_Separation']
    coffee_percs = scenario_df['Coffee_Percentage']
    
    scatter = ax.scatter(drift_rates, boundaries, c=coffee_percs, s=800, 
                        cmap='RdYlBu_r', alpha=0.7, edgecolors='black')
    ax.set_xlabel('Drift Rate')
    ax.set_ylabel('Boundary Separation')
    ax.set_title('Parameter Space and Choice Outcomes')
    
    # Add scenario labels with full scenario numbers
    for i in range(len(scenario_labels)):
        ax.annotate(f'S{i+1}', (drift_rates.iloc[i], boundaries.iloc[i]), 
                   fontsize=24, ha='center', va='center', fontweight='bold', color='red')
    
    plt.colorbar(scatter, ax=ax, label='Coffee Choice %')
    
    # 4. Speed-accuracy relationship across scenarios
    ax = axes[1, 1]
    # Unweighted average of the two per-choice mean RTs (ignores how often each choice occurred)
    mean_rts = scenario_df[['Mean_RT_Coffee', 'Mean_RT_Tea']].mean(axis=1)
    choice_consistency = np.abs(scenario_df['Coffee_Percentage'] - 50)  # Deviation from a 50-50 split
    
    ax.scatter(mean_rts, choice_consistency, s=800, alpha=0.7, 
              c=range(len(scenarios)), cmap='viridis', edgecolors='red')
    
    for i in range(len(scenario_labels)):
        ax.annotate(f'S{i+1}', (mean_rts.iloc[i], choice_consistency.iloc[i]), 
                   fontsize=24, ha='center', va='center', fontweight='bold', color='red')
    
    ax.set_xlabel('Mean Reaction Time (s)')
    ax.set_ylabel('Choice Consistency (|deviation from 50%|)')
    ax.set_title('Speed vs. Choice Consistency')
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.show()
    
    # Create legend for scenario mapping
    display(HTML("""
    <div style="background: #f8f9fa; border: 1px solid #dee2e6; border-radius: 5px; padding: 15px; margin: 15px 0;">
        <h3 style="color: #495057; margin-bottom: 10px;">Scenario Reference Guide</h3>
        <ul style="font-size: 14px; color: #495057; columns: 2; column-gap: 20px;">
    """ + "".join([f"<li><strong>S{i+1}:</strong> {name}</li>" for i, name in enumerate(scenarios.keys())]) + """
        </ul>
    </div>
    """))