Mohammad Alkousa

    Innopolis University
    ul. Universitetskaya 1, Innopolis, 420500 Russia

    Publications:

    Alkousa M. S., Stonyakin F. S., Abdo A. M., Alcheikh M. M.
    Mirror Descent Methods with Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
    Abstract
    This paper is devoted to new mirror descent-type methods with switching between two types of iteration points (productive and non-productive) for constrained convex optimization problems with several convex functional (inequality-type) constraints. We propose two methods (a standard one and its modification) with a new weighting scheme for the iteration points, which assigns smaller weights to the initial points and larger weights to the most recent ones, thereby empirically improving the convergence rate of the proposed methods. The proposed modification makes it possible to reduce the running time of the method by skipping some of the functional constraints at non-productive steps. We derive bounds on the convergence rate of the proposed methods with time-varying step sizes, which show that the proposed methods are optimal from the viewpoint of lower oracle estimates. We also present the results of numerical experiments illustrating the advantages of the proposed methods on several examples, such as the best approximation problem, the Fermat – Torricelli – Steiner problem, the smallest covering ball problem, and the maximum of a finite collection of linear functions. A minimal illustrative sketch of the switching scheme is given after this entry.
    Keywords: convex optimization, non-smooth problem, problem with functional constraints, mirror descent, optimal convergence rate
    Citation: Alkousa M. S., Stonyakin F. S., Abdo A. M., Alcheikh M. M., Mirror Descent Methods with Weighting Scheme for Outputs for Optimization Problems with Functional Constraints, Rus. J. Nonlin. Dyn., 2024, Vol. 20, no. 5, pp. 727-745
    DOI:10.20537/nd241207
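    The following is a minimal Python sketch of the switching scheme described in the abstract above, under simplifying assumptions: a Euclidean prox setup (the paper works with general mirror maps), an illustrative weight w_k = k + 1 that grows with the iteration counter, and step sizes of the form eps / ||d||^2. The objective and constraint in the toy usage are hypothetical; this is an illustration of the idea, not the authors' implementation.

    import numpy as np

    def mirror_descent_switching(grad_f, constraints, grad_constraints,
                                 x0, eps, n_iters, radius=None):
        """Euclidean-prox sketch of mirror descent with switching between
        productive and non-productive steps; the output is a weighted average
        of the productive points, with weights increasing in the iteration index."""
        x = np.array(x0, dtype=float)
        weighted_sum = np.zeros_like(x)
        total_weight = 0.0
        for k in range(n_iters):
            # Look for a constraint violated by more than eps.
            violated = next((m for m, g in enumerate(constraints) if g(x) > eps), None)
            if violated is None:
                # Productive step: subgradient of the objective; record the point
                # with a weight that favors later iterations (here w_k = k + 1).
                d = grad_f(x)
                weighted_sum += (k + 1) * x
                total_weight += k + 1
            else:
                # Non-productive step: subgradient of the violated constraint.
                d = grad_constraints[violated](x)
            h = eps / max(np.dot(d, d), 1e-12)   # time-varying step size
            x = x - h * d
            if radius is not None:               # keep iterates in a simple ball
                nrm = np.linalg.norm(x)
                if nrm > radius:
                    x *= radius / nrm
        # Weighted average of productive points (falls back to the last iterate).
        return weighted_sum / total_weight if total_weight > 0 else x

    # Toy usage (hypothetical instance): minimize ||x - c||_2 subject to ||x||_2 <= 1.
    c = np.array([2.0, 1.0])
    grad_f = lambda x: (x - c) / max(np.linalg.norm(x - c), 1e-12)
    g = [lambda x: np.linalg.norm(x) - 1.0]
    grad_g = [lambda x: x / max(np.linalg.norm(x), 1e-12)]
    x_hat = mirror_descent_switching(grad_f, g, grad_g, x0=np.zeros(2),
                                     eps=1e-3, n_iters=20000, radius=2.0)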
    Gasnikov A. V., Alkousa M. S., Lobanov A. V., Dorn Y. V., Stonyakin F. S., Kuruzov I. A., Singh S. R.
    On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle
    Abstract
    When dealing with many machine learning models, optimization problems are frequently challenging because the structure and characterization of the objective functions are only partially understood. Therefore, major complications arise for first-order algorithms, since gradient computations are difficult or even impossible in various scenarios. For this reason, we resort to derivative-free (zeroth-order) methods. This paper is devoted to an approach to minimizing quasi-convex functions using only a recently proposed (in [56]) comparison oracle. This oracle compares the function values at two points and reports which is larger; thus, in the proposed approach, comparisons are all that is needed to solve the optimization problem under consideration. The proposed algorithm is based on the technique of comparison-based gradient direction estimation and comparison-based approximate normalized gradient descent. Normalized gradient descent is an adaptation of gradient descent that updates according to the direction of the gradient rather than the gradient itself. We prove a convergence rate for the proposed algorithm when the objective function is smooth and strictly quasi-convex on $\mathbb{R}^n$: the algorithm needs $\mathcal{O}\left(\left(n D^2/\varepsilon^2\right) \log\left(n D/\varepsilon\right)\right)$ comparison queries to find an $\varepsilon$-approximate solution, where $D$ is an upper bound on the distance between all generated iteration points and an optimal solution. A minimal illustrative sketch of this approach is given after this entry.
    Keywords: quasi-convex function, gradient-free algorithm, smooth function, comparison oracle, normalized gradient descent
    Citation: Gasnikov A. V., Alkousa M. S., Lobanov A. V., Dorn Y. V., Stonyakin F. S., Kuruzov I. A., Singh S. R., On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle, Rus. J. Nonlin. Dyn., 2024, Vol. 20, no. 5, pp. 813-825
    DOI:10.20537/nd241211
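    Below is a minimal Python sketch of the comparison-oracle approach described in the abstract above. The direction estimator simply averages random unit directions signed by comparison queries, which conveys the idea of comparison-based gradient-direction estimation but is not the authors' exact estimator; the parameters n_samples, delta, and step, as well as the test function, are illustrative assumptions.

    import numpy as np

    def comparison_oracle(f, x, y):
        """The only access to f: returns +1.0 if f(x) > f(y), otherwise -1.0."""
        return 1.0 if f(x) > f(y) else -1.0

    def estimate_direction(f, x, n_samples, delta, rng):
        """Comparison-based gradient-direction estimate: average random unit
        directions, each signed by whether a small step along it increases f."""
        g = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape[0])
            u /= np.linalg.norm(u)
            # +1 if f grows along u, so u contributes to the ascent direction.
            g += comparison_oracle(f, x + delta * u, x - delta * u) * u
        nrm = np.linalg.norm(g)
        return g / nrm if nrm > 0 else g

    def comparison_normalized_gd(f, x0, step, n_iters, n_samples=20, delta=1e-4, seed=0):
        """Normalized gradient descent driven only by comparison queries:
        the update uses the estimated gradient direction, never its magnitude."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        best = x.copy()
        for _ in range(n_iters):
            x = x - step * estimate_direction(f, x, n_samples, delta, rng)
            if comparison_oracle(f, best, x) > 0:   # keep the better iterate seen so far
                best = x.copy()
        return best

    # Toy usage (hypothetical instance): a smooth, strictly quasi-convex function
    # minimized at x = (1, ..., 1).
    f = lambda x: np.log(1.0 + np.dot(x - 1.0, x - 1.0))
    x_hat = comparison_normalized_gd(f, x0=np.zeros(5), step=0.05, n_iters=2000)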
