dc.creator: Vera, Matías Alejandro
dc.creator: Rey Vega, Leonardo Javier
dc.creator: Piantanida, Pablo
dc.date.accessioned: 2020-07-01T15:20:58Z
dc.date.accessioned: 2022-10-15T14:54:23Z
dc.date.available: 2020-07-01T15:20:58Z
dc.date.available: 2022-10-15T14:54:23Z
dc.date.created: 2020-07-01T15:20:58Z
dc.date.issued: 2019-02
dc.identifier: Vera, Matías Alejandro; Rey Vega, Leonardo Javier; Piantanida, Pablo; Collaborative Information Bottleneck; Institute of Electrical and Electronics Engineers; IEEE Transactions on Information Theory; 65; 2-2019; 787-815
dc.identifier: 0018-9448
dc.identifier: http://hdl.handle.net/11336/108570
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4399225
dc.description.abstract: This paper investigates a multi-terminal source coding problem under a logarithmic loss fidelity which does not necessarily lead to an additive distortion measure. The problem is motivated by an extension of the information bottleneck method to a multi-source scenario where several encoders have to build cooperatively rate-limited descriptions of their sources in order to maximize information with respect to other unobserved (hidden) sources. More precisely, we study fundamental information-theoretic limits of the so-called: 1) two-way collaborative information bottleneck (TW-CIB) and 2) collaborative distributed information bottleneck (CDIB) problems. The TW-CIB problem consists of two distant encoders that separately observe marginal (dependent) components X1 and X2 and can cooperate through multiple exchanges of limited information with the aim of extracting information about hidden variables (Y1, Y2), which can be arbitrarily dependent on (X1, X2). On the other hand, in CDIB, there are two cooperating encoders which separately observe X1 and X2 and a third node which can listen to the exchanges between the two encoders in order to obtain information about a hidden variable Y. The relevance (figure-of-merit) is measured in terms of a normalized (per-sample) multi-letter mutual information metric (log-loss fidelity), and an interesting tradeoff arises by constraining the complexity of descriptions, measured in terms of the rates needed for the exchanges between the encoders and decoders involved. Inner and outer bounds to the complexity-relevance region of these problems are derived, from which optimality is characterized for several cases of interest. Our resulting theoretical complexity-relevance regions are finally evaluated for binary symmetric and Gaussian statistical models, showing theoretical tradeoffs between the complexity-constrained descriptions and their relevance with respect to the hidden variables.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation: info:eu-repo/semantics/altIdentifier/url/https://ieeexplore.ieee.org/document/8543840
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1109/TIT.2018.2883295
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.subject: INFORMATION BOTTLENECK
dc.subject: RELEVANCE
dc.subject: COMPLEXITY
dc.subject: RATE
dc.title: Collaborative Information Bottleneck
dc.type: info:eu-repo/semantics/article
dc.type: info:ar-repo/semantics/artículo
dc.type: info:eu-repo/semantics/publishedVersion