
On data privacy for machine learning

Vicenc Torra, Maynooth University
12-1pm, 15th Mar 2019

Abstract

Data privacy studies and develops methods and tools for avoiding the 
disclosure of sensitive information. Privacy models, computational 
definitions of privacy, make it possible to define when data and models 
are considered safe with respect to disclosure. Some can be seen as 
Boolean properties, others as quantitative measurements. Data protection 
mechanisms are defined to be compliant with privacy models, and to 
achieve a good trade-off between disclosure risk and data utility.
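To illustrate the distinction between Boolean and quantitative views of privacy, the following sketch (not from the talk; records and attribute names are hypothetical) checks k-anonymity, a well-known Boolean privacy model, alongside the size of the smallest quasi-identifier group as a simple quantitative risk measure:

```python
# Hypothetical illustration: k-anonymity as a Boolean privacy model,
# and the smallest group size as a quantitative disclosure-risk measure.
from collections import Counter

def smallest_group(records, quasi_identifiers):
    """Size of the smallest group of records sharing the same
    quasi-identifier values (a quantitative risk measure)."""
    groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return min(groups.values())

def is_k_anonymous(records, quasi_identifiers, k):
    """Boolean privacy model: every combination of quasi-identifier
    values must appear in at least k records."""
    return smallest_group(records, quasi_identifiers) >= k

# Toy masked dataset (generalized age and zip code).
records = [
    {"age": "30-39", "zip": "353**", "disease": "flu"},
    {"age": "30-39", "zip": "353**", "disease": "cold"},
    {"age": "40-49", "zip": "354**", "disease": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], 2))  # False: one group has a single record
```

Masking methods such as generalization or suppression would then be applied until the Boolean check holds, trading data utility for lower disclosure risk.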

In this talk I will give an overview of privacy models and disclosure risk 
measures. I will use this overview to place our work in the area. This 
includes masking methods and disclosure risk assessment. I will discuss 
the use of machine learning for disclosure risk assessment, and present 
our research related to data privacy for machine learning.

Short Bio

Vicenc Torra is currently a professor at the Hamilton Institute at 
Maynooth University (Ireland). His fields of interest include data 
privacy, machine learning, approximate reasoning (fuzzy sets, fuzzy 
measures/non-additive measures and integrals), decision making, 
aggregation and integration.

He has written six books including "Modeling Decisions" (with Y. Narukawa, 
Springer, 2007), "Data Privacy" (Springer, 2017), and "Scala: From a 
Functional Programming Perspective" (Springer, 2016), and edited several 
books including "Data Science in Practice" (with A. Said, Springer, 2019) 
and "Non-Additive Measures: Theory and Applications" (with Y. Narukawa 
and M. Sugeno, Springer, 2014). He is the founder and editor of the 
Transactions on Data Privacy, and started in 2004 the annual conference 
series Modeling Decisions for Artificial Intelligence (MDAI). His web 
page is: 
http://www.mdai.cat/vtorra.

Venue

Large Conference Room, O'Reilly Institute