Invited speakers

 


Marty LAFOREST (Université du Québec à Trois-Rivières)

Marty Laforest is full Professor of Linguistics in the Department of Lettres et communication sociale, Université du Québec à Trois-Rivières (Québec, Canada). She specializes in discourse analysis, sociolinguistics and pragmatics. She has written on talk at work, conflictual discourse, misunderstandings, and other phenomena related to the negotiation of meaning in talk-in-interaction. She has been involved in research on forensic applications of discourse analysis since 2003.

Title: Assessing perception of danger: methodological aspects of threat analysis on social media short messages

Abstract:

With social media platforms such as Twitter, every citizen now has the possibility of expressing themselves publicly on any subject. This freedom comes with a dark side: bullying, harassment and threats are common. To deal with these phenomena, law enforcement agencies hire individuals to monitor these networks. Their job consists of detecting messages that might indicate a danger, e.g. a threat to one or several citizens. Among other types of messages posted on various platforms, they examine tweets containing one or several keywords from a list of several dozen. The tweets deemed worrisome are subject to investigation. Most studies in this field (including Oostdijk and van Halteren, 2013; Spitters et al., 2014) aim to improve algorithms that reduce the number of messages to be processed "manually". Adopting a different perspective, based on discourse analysis, we seek to answer the following questions:

1) While reading tweets, do employees of law enforcement agencies assess danger in the same way as "ordinary" Twitter users?

2) Beyond keywords, which pragmatic and linguistic characteristics of tweets are deemed worrisome (and therefore investigated)?

These are important questions, because it seems difficult to improve automatic detection systems without a better understanding of how employees of law enforcement agencies interpret tweets. Furthermore, comparison with the perceptions of ordinary citizens could help us understand whether and how investigative experience changes the assessment of danger. To answer these questions, we designed a test of the perception of danger (defined as what requires immediate police intervention) that can be inferred from more or less hateful tweets containing at least one "sensitive" keyword and targeting a variety of victims (known and unknown individuals, different ethnic or religious groups). Participants were divided into two groups: employees of law enforcement agencies in charge of social media surveillance, and "ordinary" Twitter users. The presentation will focus on the methodological aspects of the study: first the selection of the tweets to be tested, then the choices made about how the tweets would be presented to participants and the design of the test itself.
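The keyword-based triage step described above can be sketched in a few lines. This is an illustrative approximation only: the watch list, the tweets and the function name below are invented for the example, not taken from the study.

```python
# Hypothetical sketch of the keyword-filtering step: flag tweets that
# contain at least one "sensitive" keyword from a watch list.
import re

def flag_tweets(tweets, keywords):
    """Return the tweets containing at least one watched keyword."""
    # Word-boundary matching avoids flagging substrings (e.g. "bombastic").
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(k) for k in keywords) + r")\b",
        re.IGNORECASE,
    )
    return [t for t in tweets if pattern.search(t)]

tweets = [
    "Lovely weather in Montreal today",
    "I swear someone should BOMB that place",
    "That movie was a bombastic mess",
]
flagged = flag_tweets(tweets, ["bomb", "kill"])
print(flagged)  # only the second tweet is flagged
```

A filter like this illustrates why the study's questions matter: it cannot distinguish a genuine threat from hyperbole, so the flagged tweets still depend entirely on a human reader's assessment of danger.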

(This research is conducted with Francis FORTIN (Université de Montréal) and Geneviève BERNARD BARBEAU (Université du Québec à Trois-Rivières))

 

***********************************

 


Julien VELCIN (University Lumière Lyon 2)


Julien Velcin is Professor of Computer Science at the University Lumière Lyon 2. He works at the ERIC Lab, in the Data Mining & Decision team, on topics related to artificial intelligence, machine learning and data mining. More precisely, his research aims at designing new models and algorithms to deal with complex data. One of his favorite application fields is the analysis of topics and opinions conveyed through social media.

Title: Cultural Differences - Comparison of French and US Twitter Usage During Elections

Abstract:

This talk presents the results of a study carried out to compare the political "mediascape" of Twitter in the US and France. The aim of the study was twofold: to extract communities in an unsupervised way from the retweet network (a structural relation), and to describe these communities according to their hashtag usage (a behavioral relation).
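The abstract does not say which community-detection method was used, so the following is only a minimal sketch of the idea: build a graph from retweet counts on a toy dataset (invented here), keep only sufficiently strong edges, and take the connected components as candidate communities. Real studies would typically use modularity- or clustering-based detection instead.

```python
# Toy sketch of extracting communities from a retweet network.
# All data and the threshold are illustrative assumptions.
from collections import defaultdict, deque

def communities(retweets, threshold=2):
    """retweets: dict (u, v) -> number of times u retweeted v.
    Keeps edges retweeted at least `threshold` times, then returns
    the connected components of the resulting undirected graph."""
    graph = defaultdict(set)
    for (u, v), count in retweets.items():
        if count >= threshold:
            graph[u].add(v)
            graph[v].add(u)
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:                      # breadth-first traversal
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return sorted(comps)

retweets = {
    ("a", "b"): 5, ("b", "c"): 4, ("c", "a"): 3,   # dense cluster 1
    ("d", "e"): 6, ("e", "f"): 2,                  # dense cluster 2
    ("c", "d"): 1,                                  # weak bridge, dropped
}
print(communities(retweets))  # → [['a', 'b', 'c'], ['d', 'e', 'f']]
```

Thresholding the bridge edge is what separates the two groups here; unsupervised methods like modularity optimization achieve the same effect without a hand-picked cutoff.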
 
In particular, we observed that hashtag usage can be categorized in a way that helps improve our understanding of the discovered communities. Moreover, we can set up an Inductive Logic Programming (ILP) formulation for selecting useful hashtags to describe the communities' "behavior". The idea is to find the minimal set of hashtags that covers the maximum number of communities.
 
All these methods are a first step toward building explanations on top of machine learning algorithms (here, community detection based on clustering). This work was done in collaboration with Ian Davidson and Yue Wu of the University of California at Davis, and Antoine Gourru, PhD student at the ERIC Lab.


