Seminar Francois Durand: "Analysis of Approval Voting in Poisson Games"
– February 13, 2020
Title:
Analysis of Approval Voting in Poisson Games
Abstract:
Poisson games were introduced by Roger Myerson to model large elections. In this framework, the number of players of each type follows a Poisson distribution. Typically, the expected number of players is considered large, and players base their strategic response on rare events, such as a "pivot" where two candidates are tied for victory, and where one vote can make the difference. The main part of the talk will be dedicated to presenting Poisson games and the main theoretical results about them. Then I will say a few words about our work on a voting rule called Approval voting.
References:
R. Myerson (2000). Large Poisson games. Journal of Economic Theory, 94, 7–45.
R. Myerson (2002). Comparison of scoring rules in Poisson voting games. Journal of Economic Theory, 103, 219–251.
F. Durand, A. Macé and M. Núñez (2019). Analysis of Approval Voting in Poisson Games. ACM Conference on Economics and Computation.
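To make the notion of a pivot concrete, here is a small illustrative simulation (my own sketch, not material from the talk): with two candidates whose vote counts are independent Poisson variables, one can estimate how likely an exact tie is, and observe that this probability shrinks as the expected electorate grows.

```python
import math
import random

def pivot_probability(n_a, n_b, trials=100_000, seed=0):
    """Estimate the probability that candidates A and B are exactly tied
    when the numbers of their voters are independent Poisson variables
    with means n_a and n_b (a hypothetical two-candidate setup)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm; adequate for moderate values of lam.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    ties = sum(poisson(n_a) == poisson(n_b) for _ in range(trials))
    return ties / trials

# For equal means n, the exact tie probability behaves like
# 1 / (2 * sqrt(pi * n)) as n grows, so pivots become rare -- yet they
# drive the strategic analysis, exactly as in Myerson's framework.
```

For instance, `pivot_probability(10, 10)` is close to 0.09, while with unequal expected supports such as `pivot_probability(5, 20)` a tie is already a very rare event.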
The design of efficient algorithms is closely linked to the evaluation of their performance. My work focuses on the use of stochastic models for the performance evaluation of large distributed systems. I am interested in developing tools that can characterize the emergent behavior of such systems and improve their performance. This leads me to solve stochastic control and optimization problems, notably through operations research methods. These problems suffer from combinatorial explosion: the complexity of a problem grows exponentially with the number of objects that compose it. It is therefore necessary to design models, but also algorithmic processes, whose complexity does not increase too rapidly with the size of the system.
In this presentation, I will summarize a few of my contributions on the use and refinement of mean field approximation to study the performance of distributed systems and algorithms. I will introduce the key concepts behind mean field approximation by giving some examples of where it can be applied. I will review some of the classical models and try to answer a very natural question: how large should a system be for mean field to apply?
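As a toy illustration of the mean field idea (my own example, not one from the talk): in a simple epidemic-style model where each of n objects switches between two states at rates that depend only on the empirical fraction of "infected" objects, the stochastic system concentrates, for large n, around the fixed point of a deterministic ODE.

```python
import random

def simulate_sis(n, beta, mu, t_end, seed=0):
    """Continuous-time SIS-type model on n agents with uniform mixing:
    a susceptible agent gets infected at rate beta * (fraction infected),
    an infected agent recovers at rate mu.  Returns the final fraction
    of infected agents (hypothetical example)."""
    rng = random.Random(seed)
    infected = n // 2          # start with half the population infected
    t = 0.0
    while t < t_end:
        rate_inf = beta * (infected / n) * (n - infected)  # total infection rate
        rate_rec = mu * infected                           # total recovery rate
        total = rate_inf + rate_rec
        if total == 0:
            break
        t += rng.expovariate(total)                        # next event time
        if rng.random() < rate_inf / total:
            infected += 1
        else:
            infected -= 1
    return infected / n

# Mean field ODE: dx/dt = beta*x*(1-x) - mu*x, with fixed point
# x* = 1 - mu/beta.  For beta=2, mu=1 the fixed point is 0.5, and the
# stochastic system with large n stays close to it.
```

Running `simulate_sis(1000, 2.0, 1.0, 20.0)` gives a fraction near the mean field fixed point 0.5; the accuracy of the approximation as a function of n is precisely the kind of question addressed in the talk.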
Distributed Variational Representation Learning, by Abdellatif Zaidi (Huawei)
– February 11, 2020
We connect the information flow in a neural network to sufficient statistics; and show how techniques that are rooted in information theory, such as the source-coding based information bottleneck method can lead to improved architectures, as well as a better understanding of the theoretical foundation of neural networks, viewed as a cascade compression network. We discuss distributed architectures and illustrate our results and view through some numerical examples.
During the last decades, the importance of considering data uncertainty in optimization problems has become increasingly apparent, since small fluctuations of the input data may lead to comparably bad decisions in many practical problems when uncertainty is ignored. If the probability distribution of the uncertain data is not known (or cannot be sufficiently well estimated), a common technique is to estimate bounds on the uncertain data (i.e., define uncertainty sets) and to identify optimal solutions that are robust against data fluctuations within these bounds. This approach leads to the robust optimization paradigm, which allows one to consider uncertain objectives and constraints [1].
Optimization problems where only the objective is uncertain arise prominently, for instance, in the analysis of social networks. This stems from the fact that the strength of social ties (i.e., the amount of influence individuals exert on each other) or the willingness of individuals to adopt and share information can, for example, only be roughly estimated from observations.
A fundamental problem in social network analysis concerns the identification of communities (e.g., work groups, interest groups). This can be modeled as a Dominant Set Clustering Problem [5,6,7], which in turn leads to a Standard Quadratic Optimization Problem (StQP); see [2]. Here the link strengths enter the objective, while the constraints are the familiar probability constraints and can therefore be considered certain.
Hence we investigate data uncertainty in the objective function of StQPs, considering different uncertainty sets, and derive implications for the complexity of robust variants of the corresponding deterministic counterparts. We show that considering data uncertainty in an StQP results in another StQP of the same complexity if ellipsoidal, spherical, or box uncertainty sets are assumed [4]. Moreover, we discuss the implications of polyhedral uncertainty sets and derive rigorous bounds for this case based on copositive optimization [3].
References
[1] Ben-Tal A, El Ghaoui L, Nemirovski AS (2009) Robust optimization. Princeton Series in Applied Mathematics (Princeton NJ: Princeton University Press).
[2] Bomze IM (1998) On standard quadratic optimization problems. Journal of Global Optimization 13(4):369–387.
[3] Bomze IM (2012) Copositive optimization – Recent developments and applications. European Journal of Operational Research 216(3):509–520.
[4] Bomze IM, Kahr M, Leitner M. (2020) Trust your data or not - StQP remains StQP: Community Detection via Robust Standard Quadratic Optimization. To appear in Mathematics of OR.
[5] Pavan M, Pelillo M (2007) Dominant sets and pairwise clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(1):167–172.
[6] Rota Bulò S, Pelillo M (2017) Dominant-set clustering: A review. European Journal of Operational Research 262(1):1–13.
[7] Rota Bulò S, Pelillo M, Bomze IM (2011) Graph-based quadratic optimization: A fast evolutionary approach. Computer Vision and Image Understanding 115(7):984–995.
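For readers unfamiliar with StQPs: they optimize a quadratic form over the standard simplex. The sketch below (an illustration of the deterministic problem only, not of the robust method in [4]) finds a local solution with discrete replicator dynamics, the evolutionary process behind dominant-set clustering [5,7]; the similarity matrix and cluster structure are invented for the example.

```python
def replicator_stqp(A, iters=1000, tol=1e-12):
    """Local maximizer of x^T A x over the standard simplex, computed with
    discrete replicator dynamics (requires A entrywise nonnegative)."""
    n = len(A)
    x = [1.0 / n] * n                        # start at the barycenter
    val = 0.0
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        val = sum(x[i] * Ax[i] for i in range(n))
        if val <= tol:
            break
        x = [x[i] * Ax[i] / val for i in range(n)]  # payoff-proportional update
    return x, val

# Toy similarity matrix: vertices 0-2 form a tight cluster, vertex 3 is
# only weakly tied to the others.  The dynamics concentrate all weight on
# the dominant set {0, 1, 2} and drive the weight of vertex 3 to zero.
A = [[0.0, 1.0, 1.0, 0.1],
     [1.0, 0.0, 1.0, 0.1],
     [1.0, 1.0, 0.0, 0.1],
     [0.1, 0.1, 0.1, 0.0]]
x, val = replicator_stqp(A)
```

The returned support {0, 1, 2} identifies the community; the robust variants studied in [4] replace the fixed matrix A by a worst case over an uncertainty set, and the result is that for several natural sets the worst-case problem is again an StQP of this form.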
Strategic information transmission with receiver's type-dependent decision sets, by Stephan Sémirat (GAEL, Grenoble)
– February 27, 2020
Strategic information transmission with receiver's type-dependent decision sets.
Abstract:
We consider a sender-receiver game in which the sender has finitely many types and the receiver makes decisions in a bounded real interval. We assume that utility functions are concave, single-peaked, and supermodular. After the cheap-talk phase, the receiver makes a decision, which must fulfill a constraint (e.g., a participation constraint) that depends on the sender's type. Hence a necessary equilibrium condition is that the receiver maximizes his expected utility subject to the constraints of all positive-probability types. This necessary condition may not hold at the receiver's prior belief, so a non-revealing equilibrium may fail to exist. We propose a constructive algorithm that always achieves a partitional perfect Bayesian equilibrium.
Ranking Online Social Users by their Influence, by Anastasios Giovanidis (CNRS, LIP6)
Abstract:
In this talk I will introduce an original mathematical model to analyse the diffusion of posts within a generic online social platform. As a main result, using the developed model, we derive in closed form the probabilities that posts originating from a given user are found on the Wall and Newsfeed of any other user in the platform. By combining these probabilities we get a measure of per-user influence on the entire network. This constitutes a new centrality measure which is more expressive than existing ones, in the sense that it combines the user's position on the graph with the user's posting activity. Comparisons with simulations show the accuracy of this model and its robustness with respect to the modelling assumptions. Furthermore, its application on large data traces from real platforms asserts its validity for real-world applications, and the possibilities it opens for explaining real diffusion phenomena and predicting actual user influence.
Bio:
Anastasios Giovanidis received the Diploma degree in electrical and computer engineering from the National Technical University of Athens, Greece, in 2005, and the Dr. Ing. degree in wireless communications and information theory from the Technical University of Berlin, Germany, in 2010. He has been a postdoctoral fellow, first with the Zuse Institute Berlin, Germany (with Prof. Martin Grötschel), and later with INRIA, Paris, France (with Prof. François Baccelli). Since 2013 he has been a permanent researcher of the French National Center for Scientific Research (CNRS, CR1). From 2013 until 2016 he was affiliated with the Télécom ParisTech CNRS-LTCI laboratory. Since 2016 he has been affiliated with the computer science laboratory LIP6 of Sorbonne University. He has served as General co-chair for WIOPT 2017, CCDWN 2018, and GameNets 2019. His current research interests include performance analysis and optimisation of telecom and social networks, supported by data analysis and learning.
F. Falniowski: "Robust routes to chaos in congestion games: The effects of scale on learning dynamics"
– March 13, 2020
We study the effects of increasing the population size/scale of costs in congestion games, and generalize recent results for the well-known Multiplicative Weights Update dynamic to a large class of Follow-the-Regularized-Leader dynamics (FoReL). We prove that, even in simple linear congestion games with two parallel links, learning becomes unstable as the population/scale increases and (unless the game is fully symmetric) eventually Li-Yorke chaotic. Despite their chaotic instability, the dynamics provably converge in a time-average sense to an exact equilibrium for any choice of learning rate and any scale of costs.
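A minimal numerical sketch of the phenomenon (my own illustration, not the authors' code) for Multiplicative Weights Update on two identical parallel links with linear cost scale·load: at a small cost scale the update converges to the symmetric equilibrium, while at a large scale the very same rule oscillates wildly, even though its time average stays near the equilibrium.

```python
import math

def mwu_two_links(scale, eps=1.0, x0=0.3, steps=200):
    """Multiplicative Weights Update in a nonatomic congestion game with
    two identical parallel links and linear cost scale * load.
    Returns the trajectory of the fraction of players on link 1."""
    x, traj = x0, []
    for _ in range(steps):
        c1, c2 = scale * x, scale * (1.0 - x)   # per-link costs
        w1 = x * math.exp(-eps * c1)            # exponential reweighting
        w2 = (1.0 - x) * math.exp(-eps * c2)
        x = w1 / (w1 + w2)
        traj.append(x)
    return traj

# With eps=1, the fixed point x = 1/2 is stable when eps*scale < 4 and
# loses stability beyond that: a small scale converges, a large scale
# produces persistent large oscillations around the equilibrium.
```

Comparing `mwu_two_links(2.0)` (rapid convergence to 1/2) with `mwu_two_links(10.0)` (sustained oscillation between nearly empty and nearly full links) reproduces, in miniature, the instability-with-scale effect the abstract describes.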