The closed loop between opinion formation and personalised recommendations by Paolo Frasca (gipsa)
– October 25, 2018
Title: The closed loop between opinion formation and personalised recommendations
Abstract:
The literature on social dynamics contains many examples of mathematical models of opinion formation that capture the influence between individuals and the effects of partisan media. Nowadays, a large part of social interactions and information gathering happens online, where it is mediated by recommender systems that guide users to relevant content. Excessive personalisation, however, might artificially distort the user's perception and exacerbate opinion polarisation: while experimental research has investigated the issue, no mathematical model features the active role played by recommender systems. In this work, we aim to close this gap by making explicit the feedback loop between opinion formation and the recommendation of personalised content. We focus on a single user interacting with an idealised online news aggregator, where the user has a tendency to prefer confirmatory news. We define metrics for opinion polarisation and for the efficiency of the news aggregator, and perform both extensive numerical simulations and a mathematical analysis. We find that personalised content and confirmation bias both contribute to opinion polarisation.
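As a toy illustration of such a closed loop (emphatically not the model analysed in the talk: every update rule and constant below is an assumption made for this sketch), one can simulate a user whose opinion drifts towards the confirmatory items that a personalised aggregator keeps recommending:

```python
import math
import random

# Toy sketch of the opinion/recommender feedback loop. All rules and
# constants here are illustrative assumptions, not the talk's model.
ALPHA = 0.1   # influence of a read item on the opinion
BETA = 5.0    # strength of the confirmation bias
RHO = 0.8     # degree of personalisation of the recommendations

def recommend(estimate):
    """With probability RHO, recommend an item close to the aggregator's
    estimate of the user's preference; otherwise draw uniformly."""
    if random.random() < RHO:
        return max(-1.0, min(1.0, random.gauss(estimate, 0.2)))
    return random.uniform(-1.0, 1.0)

def simulate(steps=1000):
    opinion, estimate = 0.0, 0.0
    for _ in range(steps):
        item = recommend(estimate)
        # Confirmation bias: confirmatory items are more likely to be read.
        if random.random() < math.exp(-BETA * abs(item - opinion)):
            opinion += ALPHA * (item - opinion)   # opinion update
            estimate += 0.5 * (item - estimate)   # aggregator learns from the click
    return opinion

if __name__ == "__main__":
    finals = [simulate() for _ in range(200)]
    # Crude polarisation proxy (the talk defines proper metrics):
    print("mean |final opinion|:", sum(abs(o) for o in finals) / len(finals))
```

In such a sketch, increasing RHO (personalisation) or BETA (confirmation bias) tends to push the final opinions towards the extremes, which is the qualitative effect the talk quantifies.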
A multi-criteria scheduling heuristic in multicores, by Alain Girault (Spades)
– October 11, 2018
Title: A multi-criteria scheduling heuristic to optimize the execution time, reliability, power consumption, and temperature in multicores
Abstract:
We address the problem of computing a static schedule of a DAG of tasks onto a multicore architecture, with the goal of optimizing four criteria: the execution time, the failure rate, the maximum power consumption, and the peak temperature. We propose two methods. The first is a ready-list scheduling heuristic called ERPOT (Execution time, failure Rate, POwer consumption and Temperature): it builds a static schedule of the given DAG of tasks onto the given multicore such that its failure rate, power consumption, and temperature remain below three given thresholds, and such that its total execution time is as low as possible. ERPOT actively replicates tasks to decrease the failure rate, uses Dynamic Voltage and Frequency Scaling (DVFS) to decrease the power consumption, and inserts cooling times to control the peak temperature. The second method uses an Integer Linear Programming (ILP) program to compute an optimal schedule.
We advocate that, when one wants to optimize multiple criteria, it makes more sense to build a set of solutions, each corresponding to a different tradeoff between those criteria, rather than to build a single solution. This is all the more true when the criteria are antagonistic, which is the case here: for instance, improving the failure rate requires adding redundancy to the schedule (in our case, spatial redundancy), which penalizes the execution time. For this reason, we use ERPOT to build a Pareto front in the 4D space (execution time, failure rate, power, temperature), by varying the three thresholds on the failure rate, power, and temperature. Our experimental comparisons show that the schedules produced by ERPOT are on average only 10% worse than the optimal schedules computed by our ILP program, and that ERPOT outperforms the PowerPerf-PET heuristic from the literature by at least 35%.
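For readers unfamiliar with ready-list scheduling, the sketch below shows the bare skeleton that a heuristic like ERPOT builds on (illustrative only: active replication, DVFS, cooling-time insertion, and the failure/power/temperature models and threshold checks of the actual heuristic are deliberately elided):

```python
# Bare-bones ready-list scheduling of a DAG onto identical cores.
# Illustrative skeleton only; ERPOT's replication, DVFS, cooling times,
# and threshold checks on failure rate, power, and temperature are elided.

def list_schedule(dag, durations, cores):
    """dag maps each task to the set of its predecessors; returns a map
    task -> (core, start_time). A task becomes ready once all of its
    predecessors have been scheduled."""
    schedule = {}
    core_free = {c: 0.0 for c in cores}   # earliest free time per core
    remaining = {t: set(p) for t, p in dag.items()}
    while remaining:
        ready = [t for t, preds in remaining.items()
                 if all(p in schedule for p in preds)]
        # Greedy rule: pick the (ready task, core) pair finishing earliest.
        best = None
        for t in ready:
            data_ready = max((schedule[p][1] + durations[p]
                              for p in remaining[t]), default=0.0)
            for c in cores:
                start = max(core_free[c], data_ready)
                cand = (start + durations[t], t, c, start)
                if best is None or cand < best:
                    best = cand
        _, t, c, start = best
        schedule[t] = (c, start)
        core_free[c] = start + durations[t]
        del remaining[t]
    return schedule

if __name__ == "__main__":
    dag = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
    durations = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0}
    print(list_schedule(dag, durations, cores=[0, 1]))
```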
Title: Anisotropic textures and lines within images: Analysis, synthesis and super-resolution
Abstract: The analysis and synthesis of digital images are of interest in many areas such as medical imaging, astrophysics, and biology. The objectives of image processing are to improve the visual content or to extract relevant information. In this talk, we are interested in the orientation characteristics of images. Two types of oriented structures have been studied: linear geometric structures and texture images, involving different deterministic and random mathematical tools.
- Texture modeling is a challenging issue in image processing: while it is difficult to give a strict mathematical definition of textures, most proposed methods tackle their analysis and synthesis using random fields. In many cases, the model has to incorporate important features of the data, such as roughness or anisotropy. Within this stochastic framework, we propose two new models that allow control of both the local orientation and the roughness of the texture. We establish theoretical results for these models and describe their properties. We also give a rigorous meaning to the notion of orientation for a large class of Gaussian random fields.
- Geometric structures, such as edges or lines, have to be detected in order to identify objects in images. We present a new convex formulation for the problem of recovering lines in degraded images. Following the recent paradigm of super-resolution, which uncovers fine-scale information lost in the data beyond the Nyquist resolution limit, we formulate a dedicated atomic-norm penalty and solve the resulting optimization problem by means of a primal-dual algorithm. This parsimonious model enables the reconstruction of lines from lowpass measurements, even in the presence of a large amount of noise or blur. Furthermore, a Prony method performed on the rows and columns of the restored image provides a spectral estimation of the line parameters with subpixel accuracy.
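As an aside, the Prony step lends itself to a compact illustration. The sketch below (not the talk's pipeline; the signal model and sizes are assumptions) recovers the modes of a sum of complex exponentials from uniform samples, which is the mechanism by which subpixel line parameters can be read off the rows and columns of the restored image:

```python
import numpy as np

# Minimal Prony estimation sketch (illustrative; the talk applies this idea
# to rows/columns of the image restored by the atomic-norm step).
# Model: y[n] = sum_k c_k * z_k**n with K modes; we recover the z_k.

def prony_modes(y, K):
    """Estimate the K modes z_k of a sum of exponentials from uniform
    samples y, via linear prediction (least squares) plus root finding."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    # Linear prediction: y[n] = -sum_{j=1..K} a_j * y[n-j] for n >= K.
    A = np.column_stack([y[K - j - 1:N - j - 1] for j in range(K)])
    a, *_ = np.linalg.lstsq(A, -y[K:], rcond=None)
    # The modes are the roots of z^K + a_1 z^(K-1) + ... + a_K.
    return np.roots(np.concatenate(([1.0], a)))

# Example: two spectral lines at 0.12 and 0.31 cycles/sample.
n = np.arange(64)
y = 1.0 * np.exp(2j * np.pi * 0.12 * n) + 0.7 * np.exp(2j * np.pi * 0.31 * n)
print(np.sort(np.angle(prony_modes(y, K=2)) / (2 * np.pi)))  # ~ [0.12, 0.31]
```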
Abstract:
Efficiently exploiting the resources of data centers is a complex task that requires efficient and reliable load balancing and resource allocation algorithms. The former are in charge of assigning jobs to servers upon their arrival in the system, while the latter are responsible for sharing the server resources between their assigned jobs. These algorithms should adapt to various constraints, such as data locality, that restrict the feasible job assignments. In this presentation, we propose a token-based algorithm that efficiently balances the load between the servers without requiring any knowledge of the job arrival rates and server capacities. Assuming a balanced fair sharing of the server resources, we show that the resulting dynamic load balancing is insensitive to the job size distribution. Its performance is compared to that obtained under the best static load balancing and in an ideal system that would constantly optimize the resource utilization. We also make the connection with other token-based algorithms such as Join-Idle-Queue.
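For context, here is a toy sketch of the Join-Idle-Queue idea mentioned at the end (illustrative only: the token-based algorithm of the talk is more general, and service-time dynamics are ignored here). Servers deposit a token when they become idle; the dispatcher consumes a token when one is available and otherwise routes at random:

```python
import random
from collections import deque

class JIQDispatcher:
    """Toy Join-Idle-Queue dispatcher (illustrative sketch only)."""

    def __init__(self, n_servers):
        self.n_servers = n_servers
        self.idle_tokens = deque(range(n_servers))  # all servers start idle
        self.queue_len = [0] * n_servers

    def assign(self):
        """Route an arriving job: prefer a server that advertised idleness."""
        if self.idle_tokens:
            s = self.idle_tokens.popleft()
        else:
            s = random.randrange(self.n_servers)
        self.queue_len[s] += 1
        return s

    def job_done(self, s):
        """On a departure from server s, re-register s if it emptied."""
        self.queue_len[s] -= 1
        if self.queue_len[s] == 0:
            self.idle_tokens.append(s)

d = JIQDispatcher(4)
print([d.assign() for _ in range(6)])  # first four jobs consume idle tokens
```

The appeal of such token schemes is that the dispatcher needs no knowledge of arrival rates or server capacities, which is exactly the property the talk's algorithm retains.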
Design and analysis of IoT systems: the promises of the complex-systems approach, by Pascale Primet (Inria, Lyon)
– November 22, 2018
The Internet of Things (IoT) is the extension of the Internet to devices and places in the physical world. This extension opens up many technical and economic opportunities but, like any new technology, raises numerous scientific, environmental, ethical, and societal questions. Thus, despite the explosion in the number of proof-of-concept IoT deployments, the operational deployment of the IoT is held back by slow decision-making and by the complexity and risks of the technology and its applications.
We believe that complex-systems theory is a particularly relevant tool for studying these difficult questions and for assisting in the design and analysis of connected-object systems and services.
In this talk, we illustrate, through concrete examples, the specific "complex system" characteristics of IoT environments. We then present the agent-based conceptual model that we are developing in the StackiLab platform to characterize and model a dynamic IoT system, in order to build a personalised and evolving "digital twin" of it. We show how simulating scenarios based on instantiations of this model would make it possible to compare different solutions for architecture, dimensioning, planning, and technological choices, or to study, from a single abstract representation, orthogonal questions such as economic profitability, risk analysis, and performance analysis.
Finally, we open the discussion on the complementarity, and possible integration, of this approach with classical methods for modelling and analysing networks and distributed systems, in order to refine our understanding of these systems.
Deep learning architectures and training methods by Loris Felardos (Inria + IBPC)
– November 29, 2018
The past few years have seen a dramatic increase in the performance of deep learning architectures applied to fields ranging from computer vision and speech recognition to bio-informatics and drug design. This presentation will consist of three parts. Part 1 is a gentle introduction to the basic ideas that are crucial for training deep neural networks (logistic regression, SGD, and optimization methods). Part 2 focuses on the most common building blocks (convolutions, attention layers, and skip connections) of practical neural architectures such as recurrent neural networks, generative models, and the more recent graph convolutional networks. Finally, part 3 stresses the importance of carefully designed loss functions across a range of different training methods (be it supervised, semi-supervised, or unsupervised learning).
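As a taste of the "part 1" ingredients, here is a minimal, self-contained sketch (synthetic data and illustrative hyperparameters, not material from the talk) of logistic regression trained with plain SGD:

```python
import numpy as np

# Binary logistic regression trained with plain SGD on synthetic data.
# Hyperparameters and data are illustrative assumptions for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):              # one sample per SGD step
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # sigmoid prediction
        grad = p - y[i]                            # d(cross-entropy)/d(logit)
        w -= lr * grad * X[i]
        b -= lr * grad

print("training accuracy:", np.mean(((X @ w + b) > 0) == y.astype(bool)))
```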