Sanjeev Singh

    Publications:

    Serenko I. A., Dorn Y. V., Singh S. R., Kornaev A. V.
    Room for Uncertainty in Remaining Useful Life Estimation for Turbofan Jet Engines
    Abstract
    This work addresses uncertainty quantification in machine learning by treating uncertainty as a hidden model parameter that estimates the variance in the training data, thereby enhancing the interpretability of predictive models. The proposed method predicts both the target value and the certainty of that prediction, and combines this with deep ensembling to study model uncertainty, with the aim of increasing model accuracy. The approach was applied to the well-known problem of Remaining Useful Life (RUL) estimation for turbofan jet engines using NASA's dataset. The method demonstrated results competitive with other commonly used tabular data processing methods, including k-nearest neighbors, support vector machines, decision trees, and their ensembles, showing that leveraging uncertainty quantification can improve both the reliability and the accuracy of RUL predictions.
    Keywords: machine learning, analysis of sequences, uncertainty quantification, recurrent neural networks, rotor machines, remaining useful life
    Citation: Serenko I. A., Dorn Y. V., Singh S. R., Kornaev A. V., Room for Uncertainty in Remaining Useful Life Estimation for Turbofan Jet Engines, Rus. J. Nonlin. Dyn., 2024, Vol. 20, no. 5, pp. 933-943
    DOI: 10.20537/nd241218
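
    A minimal sketch (in PyTorch, with hypothetical layer sizes and training settings) of the general technique the abstract describes: a network that predicts both a mean and a variance, trained with a Gaussian negative log-likelihood, and replicated into a deep ensemble whose disagreement reflects model uncertainty. This illustrates the idea, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class MeanVarianceNet(nn.Module):
        """Predicts a mean and a variance for each input, so the network
        reports its own certainty alongside the target value."""
        def __init__(self, n_features: int, hidden: int = 64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.mean_head = nn.Linear(hidden, 1)
            self.logvar_head = nn.Linear(hidden, 1)  # log-variance keeps variance positive

        def forward(self, x):
            h = self.body(x)
            return self.mean_head(h), torch.exp(self.logvar_head(h))

    def train_ensemble(x, y, n_members: int = 5, epochs: int = 200):
        """Trains independently initialized copies (a deep ensemble); their
        disagreement at test time reflects model (epistemic) uncertainty."""
        nll = nn.GaussianNLLLoss()  # Gaussian negative log-likelihood
        members = []
        for _ in range(n_members):
            net = MeanVarianceNet(x.shape[1])
            opt = torch.optim.Adam(net.parameters(), lr=1e-3)
            for _ in range(epochs):
                mean, var = net(x)
                loss = nll(mean.squeeze(-1), y, var.squeeze(-1))
                opt.zero_grad()
                loss.backward()
                opt.step()
            members.append(net)
        return members

    @torch.no_grad()
    def predict(members, x):
        means = torch.stack([m(x)[0].squeeze(-1) for m in members])
        variances = torch.stack([m(x)[1].squeeze(-1) for m in members])
        aleatoric = variances.mean(0)              # learned data noise
        epistemic = means.var(0, unbiased=False)   # spread across ensemble members
        return means.mean(0), aleatoric + epistemic

    # Usage on synthetic data (shapes are assumptions, not the paper's dataset):
    # members = train_ensemble(torch.randn(256, 14), torch.randn(256))
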
    Gasnikov A. V., Alkousa M. S., Lobanov A. V., Dorn Y. V., Stonyakin F. S., Kuruzov I. A., Singh S. R.
    On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle
    Abstract
    In many machine learning models, optimization problems are challenging because the structure of the objective function is poorly understood. This complicates the use of first-order algorithms, since gradient computations may be difficult or even impossible. For this reason, we resort to derivative-free (zeroth-order) methods. This paper is devoted to an approach to minimizing quasi-convex functions using only a recently proposed comparison oracle [56]. This oracle compares function values at two points and reports which is larger, so under the proposed approach such comparisons are all that is needed to solve the optimization problem under consideration. The proposed algorithm combines comparison-based gradient direction estimation with comparison-based normalized gradient descent. Normalized gradient descent is an adaptation of gradient descent that updates according to the direction of the gradient rather than the gradient itself. We prove a convergence rate for the proposed algorithm when the objective function is smooth and strictly quasi-convex on $\mathbb{R}^n$: the algorithm needs $\mathcal{O}\left( \left(n D^2/\varepsilon^2 \right) \log\left(n D / \varepsilon\right)\right)$ comparison queries to find an $\varepsilon$-approximation of the optimal solution, where $D$ is an upper bound on the distance between any generated iterate and an optimal solution.
    Keywords: quasi-convex function, gradient-free algorithm, smooth function, comparison oracle, normalized gradient descent
    Citation: Gasnikov A. V., Alkousa M. S., Lobanov A. V., Dorn Y. V., Stonyakin F. S., Kuruzov I. A., Singh S. R., On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle, Rus. J. Nonlin. Dyn., 2024, Vol. 20, no. 5, pp. 813-825
    DOI: 10.20537/nd241211
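
    A minimal sketch of the scheme the abstract describes: a comparison oracle that only reports which of two function values is larger, comparison-based estimation of the gradient direction, and a normalized gradient descent step of fixed length along that direction. The estimator, sample counts, step size, and test function are illustrative assumptions, not the paper's algorithm or its analyzed parameters.

    import numpy as np

    def comparison_oracle(f, x, y):
        """The only feedback used: which of two points has the larger value."""
        return 1.0 if f(x) > f(y) else -1.0

    def estimate_direction(f, x, n_queries=100, delta=1e-3):
        """Averages comparison signs over random unit directions; the average
        approximates the normalized gradient of f at x."""
        g = np.zeros_like(x)
        for _ in range(n_queries):
            u = np.random.randn(x.shape[0])
            u /= np.linalg.norm(u)
            # sign of the directional derivative along u, up to O(delta) error
            g += comparison_oracle(f, x + delta * u, x - delta * u) * u
        norm = np.linalg.norm(g)
        return g / norm if norm > 0 else g

    def normalized_gd(f, x0, step=0.05, iters=300):
        """Normalized gradient descent: a fixed-length step along the estimated
        gradient direction, ignoring the gradient's magnitude."""
        x = x0.copy()
        for _ in range(iters):
            x -= step * estimate_direction(f, x)
        return x

    f = lambda x: np.sqrt(1.0 + x @ x)           # smooth, quasi-convex test function
    print(f(normalized_gd(f, np.full(5, 3.0))))  # approaches the minimum value 1.0
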
