LQL‑Equiv is free and open‑source software (GNU‑licensed), written in MATLAB and distributed as a standalone executable; it was developed by Cyril Voyant and Daniel Julian. It computes voxel‑wise Equivalent Dose in 2 Gy fractions (EQD₂) and Biologically Effective Dose (BED) using a Linear‑Quadratic‑Linear (LQL) model that explicitly accounts for fraction size, overall treatment time, and cellular repopulation, going beyond standard LQ‑based calculators.
Theoretical Basis: Integrates Astrahan’s LQL framework for high‑dose‑per‑fraction regimens (doses per fraction above the transition dose dₜ), Dale’s repopulation corrections, and Thames’s multi‑fractionation modeling, implemented in an algorithm that minimizes a custom cost function to compute accurate EQD₂ and BED across complex radiotherapy scenarios.
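The formulas behind these components can be sketched in a few lines. The Python below is an illustrative reading of the LQ/LQL and repopulation terms only, not LQL‑Equiv's actual implementation; the default parameter values and the continuity choice for the linear‑tail slope are assumptions.

```python
import math

def bed_lql(n, d, alpha_beta=10.0, d_t=6.0, gamma_over_alpha=None,
            T=None, T_k=28.0, T_pot=3.0, alpha=0.35):
    """Illustrative BED under a simplified LQL model (not LQL-Equiv's code).

    n: number of fractions, d: dose per fraction (Gy),
    alpha_beta: alpha/beta ratio (Gy), d_t: transition dose (Gy),
    T / T_k / T_pot: overall, kick-off and potential doubling times (days),
    alpha: radiosensitivity (1/Gy). Defaults are illustrative only.
    """
    if gamma_over_alpha is None:
        # Assumed default: continuity of the survival-curve slope at d = d_t
        gamma_over_alpha = 1.0 + 2.0 * d_t / alpha_beta
    if d <= d_t:
        bed = n * d * (1.0 + d / alpha_beta)          # classical LQ regime
    else:
        bed = n * (d_t * (1.0 + d_t / alpha_beta)     # LQ part up to d_t
                   + gamma_over_alpha * (d - d_t))    # linear tail (Astrahan-style)
    if T is not None and T > T_k:
        # Dale-type repopulation penalty once proliferation kicks off
        bed -= math.log(2.0) * (T - T_k) / (alpha * T_pot)
    return bed

def eqd2(bed, alpha_beta=10.0):
    """Equivalent dose in 2 Gy fractions for a given BED."""
    return bed / (1.0 + 2.0 / alpha_beta)
```

For a conventional 30 × 2 Gy scheme with α/β = 10 Gy this gives BED = 72 Gy and EQD₂ = 60 Gy, matching the standard LQ result since d ≤ dₜ.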
Clinical Relevance: Validation studies report dose discrepancies of up to ~25 % relative to conventional LQ‑based models, particularly in hypo‑ and hyper‑fractionated protocols and in the presence of treatment interruptions; the difference is largely driven by tumor repopulation dynamics in prostate cancer cases.
Interface & Deployment: LQL‑Equiv is distributed as a MATLAB® standalone GUI application requiring only the MATLAB Runtime on Windows (no full MATLAB license needed). The interface exposes a small set of essential adjustable parameters (e.g. the α/β ratio, kick‑off time Tₖ, and potential doubling time Tₚₒₜ), favoring usability and reproducibility.
Regulatory Scope: LQL‑Equiv is intended for research use and secondary validation only, not as a clinically certified tool. Users must verify outputs and remain responsible for clinical interpretation; the developers disclaim liability for misuse.
In summary:
Validated performance: deviations typically < 25 % compared to standard computations.
Fully open‑source, with a GUI and adjustable biological parameters.
Already cited in Google Scholar, documented on ResearchGate, and archived on Zenodo.
Designed for medical physicists and clinical researchers in radiotherapy to support accurate and personalized treatment evaluation.
Resources:
Forecasting future solar power plant production is essential to continue the development of photovoltaic energy and increase its share in the energy mix for a more sustainable future. Accurate solar radiation forecasting greatly improves the maintenance of the balance between energy supply and demand and the performance of grid management. This study assesses the influence of input selection on short-term global horizontal irradiance (GHI) forecasting across two contrasting Algerian climates: arid Ghardaïa and coastal Algiers. Eight feature selection methods (Pearson, Spearman, Mutual Information (MI), LASSO, SHAP (GB and RF), and RFE (GB and RF)) are evaluated using a Gradient Boosting model over horizons from one to six hours ahead. Input relevance depends on both the location and forecast horizon. At t+1, MI achieves the best results in Ghardaïa (nMAE = 6.44%), while LASSO performs best in Algiers (nMAE = 10.82%). At t+6, SHAP- and RFE-based methods yield the lowest errors in Ghardaïa (nMAE = 17.17%), and RFE-GB leads in Algiers (nMAE = 28.13%). Although performance gaps between methods remain moderate, relative improvements reach up to 30.28% in Ghardaïa and 12.86% in Algiers. These findings confirm that feature selection significantly enhances accuracy (especially at extended horizons) and suggest that simpler methods such as MI or LASSO can remain effective, depending on the climate context and forecast horizon.
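As an illustration of this kind of pipeline, the sketch below ranks toy features with mutual information and LASSO and then fits a Gradient Boosting model on the selected subset. The synthetic data, number of retained features, and regularization strength are assumptions for demonstration, not the study's actual setup.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy stand-in for lagged GHI / meteorological inputs: 8 candidate
# features, of which only the first three actually drive the target.
X = rng.normal(size=(500, 8))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] \
    + 0.1 * rng.normal(size=500)

# Mutual information: rank all candidates, keep the top 3.
mi = mutual_info_regression(X, y, random_state=0)
top_mi = np.argsort(mi)[::-1][:3]

# LASSO: keep features with non-zero coefficients.
lasso = Lasso(alpha=0.05).fit(X, y)
top_lasso = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# Fit the forecasting model on the MI-selected subset only.
gb = GradientBoostingRegressor(random_state=0).fit(X[:, top_mi], y)
```

Note that LASSO, being linear, can miss the nonlinear `sin` feature that MI detects, which is one reason input relevance rankings differ between methods.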
Clear-sky models are widely used in solar energy for many applications such as quality control, resource assessment, satellite-based irradiance estimation, and forecasting. However, their use in forecasting and nowcasting raises a number of challenges. Synchronization errors, reliance on the clear-sky index (the ratio of the global horizontal irradiance to its cloud-free counterpart), and the high sensitivity of clear-sky models to errors in aerosol optical depth at low solar elevation limit their added value in real-time applications. This paper explores the feasibility of short-term forecasting without relying on a clear-sky model. We propose a clear-sky-free forecasting approach using Extreme Learning Machine (ELM) models. ELM learns daily periodicity and local variability directly from raw Global Horizontal Irradiance (GHI) data, eliminating the need for clear-sky normalization, simplifying the forecasting process, and improving scalability. Our approach is a non-linear, adaptive statistical method that implicitly learns the irradiance in cloud-free conditions, removing the need for a clear-sky model and the related operational issues. Deterministic and probabilistic results are compared to traditional benchmarks, including ARMA with McClear-generated clear-sky data and quantile regression for probabilistic forecasts. ELM matches or outperforms these methods, providing accurate predictions and robust uncertainty quantification. This approach offers a simple, efficient solution for real-time solar forecasting. By overcoming the limitations of the usual multiplicative clear-sky stationarization scheme, it provides a flexible and reliable framework for modern energy systems.
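A minimal ELM of the kind described, trained on a lag-embedded synthetic GHI-like series with no clear-sky normalization, can be sketched as follows. The architecture sizes, weight scales, and toy series are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: a random (untrained) hidden
    layer, with output weights solved in closed form by ridge-regularized
    least squares."""

    def __init__(self, n_hidden=50, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Random input weights are drawn once and never trained
        self.W = self.rng.normal(scale=0.5, size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Regularized normal equations for the output layer
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy GHI-like series: daily periodicity plus noise; forecast t+1
# from the last 24 hourly values (lag embedding, no clear-sky model).
t = np.arange(2000)
ghi = np.clip(np.sin(2 * np.pi * t / 24), 0, None) \
    + 0.05 * np.random.default_rng(1).normal(size=t.size)
lags = 24
X = np.stack([ghi[i:i + lags] for i in range(len(ghi) - lags)])
y = ghi[lags:]
model = ELM().fit(X[:1500], y[:1500])
rmse = np.sqrt(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))
```

Because the lag window spans a full day, the random-feature regression can absorb the daily cycle directly, which is the mechanism replacing explicit clear-sky normalization here.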
This work presents a robust framework for quantifying solar irradiance variability and forecastability through the Stochastic Coefficient of Variation (sCV) and the Forecastability (F). Traditional metrics, such as the standard deviation, fail to isolate stochastic fluctuations from deterministic trends in solar irradiance. By considering clear-sky irradiance as a dynamic upper bound of the measurement, sCV provides a normalized, dimensionless measure of variability that theoretically ranges from 0 to 1. F extends sCV by integrating temporal dependencies via maximum autocorrelation, thus linking sCV with F. The proposed methodology is validated using synthetic cyclostationary time series and experimental data from 68 meteorological stations in Spain. Our comparative analyses demonstrate that sCV and F proficiently encapsulate multi-scale fluctuations, while addressing significant limitations inherent in traditional metrics. This comprehensive framework enables a refined quantification of solar forecast uncertainty, supporting improved decision-making in flexibility procurement and operational strategies. By assessing variability and forecastability across multiple time scales, it enhances real-time monitoring capabilities and informs adaptive energy management approaches, such as dynamic outage management and risk-adjusted capacity allocation.
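One plausible reading of these quantities, for illustration only (the paper's exact definitions differ in detail), is sketched below: sCV as a rescaled standard deviation of the clear-sky index, and F as sCV combined with the strongest autocorrelation. The rescaling by 0.5 and the combination rule are assumptions, not the published formulas.

```python
import numpy as np

def scv_and_f(ghi, ghi_clear, max_lag=24):
    """Illustrative variability/forecastability measures.

    sCV here: standard deviation of the clear-sky index
    k = GHI / GHI_clear, rescaled by 0.5 (the largest possible std of a
    [0, 1]-bounded variable) so it lies in [0, 1]. F here: sCV discounted
    by the maximum autocorrelation of k, so persistent (easily predicted)
    fluctuations reduce it. Both are stand-ins, not the paper's formulas.
    """
    day = ghi_clear > 1e-6                 # ignore night-time samples
    k = np.clip(ghi[day] / ghi_clear[day], 0.0, 1.0)
    scv = min(k.std() / 0.5, 1.0)
    kc = k - k.mean()
    acf = [np.corrcoef(kc[:-lag], kc[lag:])[0, 1]
           for lag in range(1, max_lag + 1)]
    f = scv * (1.0 - max(acf))
    return scv, f
```

With a constant clear-sky index the sCV collapses to zero regardless of the amplitude of the deterministic daily cycle, which is exactly the separation of stochastic from deterministic variation that the plain standard deviation cannot provide.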
Accurate solar energy output prediction is fundamental to integrating renewable energy sources into electrical grids, maintaining system stability, and enabling effective energy management. However, conventional error metrics—such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Skill Scores (SS)—fail to capture the multidimensional complexity of solar irradiance forecasting. These metrics lack sensitivity to forecastability, rely on arbitrary baselines (e.g., clear-sky models), and are poorly adapted to operational needs.
To address these limitations, this study introduces the NICE^k metrics (Normalized Informed Comparison of Errors, with k = 1, 2, 3, Σ), a novel evaluation framework offering a robust, interpretable, and multidimensional assessment of forecasting models. Each NICE^k score corresponds to a specific L^k norm: NICE^1 emphasizes average errors, NICE^2 highlights large deviations, NICE^3 focuses on outliers, and NICE^Σ combines all three dimensions.
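A hedged sketch of this construction: each score below is the L^k norm of a model's errors normalized by that of a reference model, with NICE^Σ averaging the k = 1, 2, 3 dimensions. The normalization by a reference and the equal-weight average are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def nice_k(errors, ref_errors, k):
    """Illustrative NICE^k: L^k norm of the model's errors relative to a
    reference model's errors (the normalization choice is an assumption)."""
    lk = lambda e, p: np.mean(np.abs(e) ** p) ** (1.0 / p)
    return lk(errors, k) / lk(ref_errors, k)

def nice_sigma(errors, ref_errors):
    """Composite score combining the k = 1, 2, 3 dimensions."""
    return np.mean([nice_k(errors, ref_errors, k) for k in (1, 2, 3)])
```

Under this reading, values below 1 mean the model beats the reference in that norm; k = 1 weights average errors, k = 2 large deviations, and k = 3 outliers, so a forecast with rare large misses scores visibly worse at k = 3 than at k = 1.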
The methodology combines synthetic Monte Carlo simulations with real-world data from the Spanish SIAR network, encompassing 68 meteorological stations in diverse climatic regions. Forecasting models evaluated include autoregressive approaches, Extreme Learning Machines, and smart persistence. Results show that theoretical and empirical NICE^k values converge only when strong statistical assumptions are met (e.g., R² ≈ 1.0 for NICE^2). Most importantly, the composite metric NICE^Σ consistently outperforms conventional metrics in discriminating between models (e.g., p-values < 0.05 for NICE^Σ vs > 0.05 for nRMSE or nMAE).
Across increasing forecast horizons, NICE^Σ yields consistently significant p-values (from 10⁻⁶ to 0.004), while nRMSE and nMAE often fail to reach statistical significance. Furthermore, traditional metrics (nRMSE, nMAE, nMBE, R²) cannot reliably distinguish between models in head-to-head comparisons. In contrast, the NICE^k family demonstrates superior statistical discrimination (p < 0.001), broader variance distributions, and better inter-study comparability.
This study confirms the theoretical and empirical validity of the NICE^k framework and highlights its operational relevance. It establishes NICE^k as a robust, unified, and interpretable alternative to conventional metrics for evaluating deterministic solar forecasting models.