TY - GEN
T1 - Learning Kolmogorov Models for Binary Random Variables
AU - Ghauch, Hadi
AU - Ghadikolaei, Hossein Shokri
AU - Skoglund, Mikael
AU - Fischione, Carlo
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/1
Y1 - 2020/11/1
N2 - We consider a set of binary random variables and address the open problems of inferring provable logical relations among these random variables, and prediction. We propose to solve these two problems by learning a Kolmogorov model (KM) for these random variables. Our proposed framework allows us to derive provable logical relations, i.e., mathematical relations among the outcomes of the random variables in the training set, and thus, extract valuable relations from that set. The proposed method to discover the logical relations is established using implication in mathematical logic, thereby offering a provable analytical basis for asserting these relations, unlike similar factorization methods. We also propose an efficient algorithm for learning the KM and show its first-order optimality, despite the combinatorial nature of the learning problem. We illustrate our general framework by applying it to recommendation systems and gene expression data. In recommendation systems, the proposed logical relations identify groups of items for which a user liking one item logically implies that he/she likes all items in that group. Our work is a significant step toward interpretable machine learning.
AB - We consider a set of binary random variables and address the open problems of inferring provable logical relations among these random variables, and prediction. We propose to solve these two problems by learning a Kolmogorov model (KM) for these random variables. Our proposed framework allows us to derive provable logical relations, i.e., mathematical relations among the outcomes of the random variables in the training set, and thus, extract valuable relations from that set. The proposed method to discover the logical relations is established using implication in mathematical logic, thereby offering a provable analytical basis for asserting these relations, unlike similar factorization methods. We also propose an efficient algorithm for learning the KM and show its first-order optimality, despite the combinatorial nature of the learning problem. We illustrate our general framework by applying it to recommendation systems and gene expression data. In recommendation systems, the proposed logical relations identify groups of items for which a user liking one item logically implies that he/she likes all items in that group. Our work is a significant step toward interpretable machine learning.
UR - https://www.scopus.com/pages/publications/85107797924
U2 - 10.1109/IEEECONF51394.2020.9443570
DO - 10.1109/IEEECONF51394.2020.9443570
M3 - Conference contribution
AN - SCOPUS:85107797924
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1204
EP - 1209
BT - Conference Record of the 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
Y2 - 1 November 2020 through 5 November 2020
ER -