AutoGL: A Library for Automated Graph Learning

Automated Graph Learning (AutoGL) is the first library for automated machine learning on graphs. AutoGL is open-source, easy to use, and flexible to extend. We propose a machine learning pipeline for graph data containing four modules: auto feature engineering, model training, hyper-parameter optimization, and auto ensemble. …
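The four-module pipeline above can be sketched with illustrative stand-ins (these function names and the toy "model" are hypothetical, not AutoGL's actual API):

```python
# Hypothetical sketch of a four-module AutoML-on-graphs pipeline.
# All names and the toy "model" are illustrative, not AutoGL's real API.

def auto_feature_engineering(graph):
    # Derive a trivial per-node feature: node degree.
    return {n: len(nbrs) for n, nbrs in graph.items()}

def train_model(features, lr):
    # Stand-in "model": score = lr-weighted mean degree.
    return lr * sum(features.values()) / len(features)

def hpo(features, candidates):
    # Hyper-parameter optimization: pick the candidate with the best score.
    return max(candidates, key=lambda lr: train_model(features, lr))

def ensemble(scores):
    # Auto ensemble: average the member models' outputs.
    return sum(scores) / len(scores)

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
feats = auto_feature_engineering(graph)      # degrees: a=2, b=1, c=1
best_lr = hpo(feats, [0.01, 0.1])
prediction = ensemble([train_model(feats, best_lr)] * 3)
```

The point is only the staging: each module consumes the previous module's output, so any stage can be swapped out independently.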

Velocity Skinning for Real-time Stylized Skeletal Animation

We propose a simple, real-time solution for adding secondary animation effects on top of standard skinning. Our method takes a standard skeleton animation as input, along with the skin mesh and rig weights. It then derives per-vertex deformations from the different linear and angular velocities along the skeletal hierarchy. …
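A minimal sketch of the idea (not the paper's implementation): each vertex blends its bones' velocities with the rig's skinning weights and is then displaced opposite to that velocity, producing a lag or "flappiness" effect. The `flappiness` parameter and both function names are assumptions for illustration:

```python
# Hypothetical sketch of velocity-based secondary motion (not the paper's code).
# A vertex is dragged opposite to the velocity it inherits from its bones,
# adding lag on top of standard linear blend skinning.

def bone_weighted_velocity(bone_velocities, weights):
    """Blend per-bone linear velocities with the rig's skinning weights."""
    vx = sum(w * v[0] for w, v in zip(weights, bone_velocities))
    vy = sum(w * v[1] for w, v in zip(weights, bone_velocities))
    vz = sum(w * v[2] for w, v in zip(weights, bone_velocities))
    return (vx, vy, vz)

def velocity_skin_offset(vertex_velocity, flappiness=0.1):
    """Per-vertex offset opposing the skinning-weighted velocity."""
    return tuple(-flappiness * v for v in vertex_velocity)

# Example: a vertex influenced by two bones moving in different directions.
vel = bone_weighted_velocity([(2.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [0.75, 0.25])
offset = velocity_skin_offset(vel, flappiness=0.1)
```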

Compressive Neural Representations of Volumetric Scalar Fields

We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields. …
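A back-of-the-envelope check of the compression argument (the grid resolution and layer sizes below are illustrative, not taken from the paper): the representation compresses whenever the network's parameter count is smaller than the number of grid samples it reproduces.

```python
# Illustrative arithmetic: an implicit neural representation maps (x, y, z)
# to a scalar, so the "compressed file" is just the network's parameters.

def mlp_param_count(layer_sizes):
    """Weights + biases of a fully-connected net, e.g. [3, 64, 64, 1]."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

grid_samples = 256 ** 3                       # scalar values in a 256^3 volume
params = mlp_param_count([3, 64, 64, 1])      # tiny (x, y, z) -> value MLP
ratio = grid_samples / params                 # compression ratio
print(params, ratio)
```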

Unsupervised Learning of Explainable Parse Trees for Improved Generalisation

Recent RvNN-based models fail to learn simple grammar and meaningful semantics in their intermediate tree representation. In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures. We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks. …

WEC: Deriving a Large-scale Cross-document Event Coreference Dataset from Wikipedia

Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. Existing corpora for this task are scarce and relatively small, annotating only modest-size clusters of documents belonging to the same topic. We present an efficient methodology for gathering a large-scale dataset for cross-document coreference from Wikipedia, where coreference links are not restricted to predefined topics. …

TedNet: A PyTorch Toolkit for Tensor Decomposition Networks

TedNet is based on the PyTorch framework, giving researchers a flexible way to exploit TDNs. TedNet implements five kinds of tensor decomposition (i.e., CANDECOMP/PARAFAC (CP), Block-Term Tucker (BT), Tucker-2, Tensor Train (TT), and Tensor Ring (TR)) on traditional deep neural layers: the convolutional layer and the fully-connected layer. …
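To see why such decompositions help, here is the parameter count of a Tensor Train (TT) factorization of a fully-connected layer. The mode shapes and TT-rank below are illustrative choices, and the helper is a sketch, not TedNet's API:

```python
# Illustrative arithmetic (not TedNet's API): a TT decomposition replaces a
# dense weight matrix with a chain of small 4-D cores. Here a 784x400
# fully-connected layer (784 = 7*4*7*4, 400 = 5*4*5*4) is factored at TT-rank 4.

def tt_param_count(in_modes, out_modes, rank):
    """Parameters of TT cores of shape r_{k-1} x m_k x n_k x r_k
    (boundary ranks are 1)."""
    ranks = [1] + [rank] * (len(in_modes) - 1) + [1]
    return sum(ranks[k] * m * n * ranks[k + 1]
               for k, (m, n) in enumerate(zip(in_modes, out_modes)))

dense = 784 * 400                                   # dense layer: 313,600 weights
tt = tt_param_count((7, 4, 7, 4), (5, 4, 5, 4), 4)  # TT cores: far fewer
print(tt, dense // tt)
```

Larger TT-ranks trade compression for expressiveness; rank 1 collapses to a rank-one factorization, while large ranks approach the dense layer.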

Classical quantum network coding: a story about tensors

Kobayashi et al. showed how to convert any network coding protocol into a quantum network coding protocol. They left open whether the existence of a quantum network coding protocol implies the existence of a classical one. We characterize the set of distribution tasks achievable with non-zero probability for both classical and quantum networks. …

A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning

Deep Neural Networks (DNNs) have recorded great success in handling medical and other complex classification tasks. As the size of a DNN model and the available dataset increase, the training process becomes more computationally intensive. We propose a generic, full end-to-end hybrid parallelization approach combining both model and data parallelism for efficient distributed and scalable training of DNN models. …
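The two axes of hybrid parallelism can be sketched abstractly (no specific framework assumed; the sharding helpers are illustrative): data parallelism shards the batch across replicas, and model parallelism shards the layer stack across stages.

```python
# Conceptual sketch of hybrid parallelism (framework-agnostic illustration):
# each device holds one model stage and processes one batch shard.

def shard_batch(batch, num_replicas):
    """Data parallelism: give each replica a contiguous slice of the batch."""
    per = (len(batch) + num_replicas - 1) // num_replicas
    return [batch[i * per:(i + 1) * per] for i in range(num_replicas)]

def shard_layers(layers, num_stages):
    """Model parallelism: assign contiguous groups of layers to stages."""
    per = (len(layers) + num_stages - 1) // num_stages
    return [layers[i * per:(i + 1) * per] for i in range(num_stages)]

batch = list(range(8))                     # 8 training samples
layers = ["conv1", "conv2", "fc1", "fc2"]  # 4 layers
replicas = shard_batch(batch, 2)           # 2 data-parallel replicas
stages = shard_layers(layers, 2)           # 2 model-parallel stages
```

A hybrid scheme composes the two: each of the 2 x 2 = 4 devices runs one stage of one replica, with gradient all-reduce across replicas and activation passing across stages.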

The Many Faces of 1-Lipschitz Neural Networks

Lipschitz-constrained models have been used to solve specific deep learning problems, such as estimating the Wasserstein distance for GANs or training neural networks robust to adversarial attacks. Despite being empirically harder to train, they are theoretically better grounded than unconstrained ones when it comes to classification. …
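One common way to enforce the 1-Lipschitz property on a linear layer (a general technique, not necessarily the one studied in this paper) is spectral normalization: divide the weight matrix by its largest singular value, estimated by power iteration.

```python
# Hedged illustration of spectral normalization: after dividing by the
# spectral norm, the linear map x -> Wx cannot expand distances.

import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def spectral_norm(W, iters=50):
    """Power iteration on W^T W; estimates sigma_max(W)."""
    x = [1.0] * len(W[0])
    for _ in range(iters):
        x = matvec(transpose(W), matvec(W, x))
        norm = math.sqrt(sum(v * v for v in x))
        x = [v / norm for v in x]
    y = matvec(W, x)
    return math.sqrt(sum(v * v for v in y))

W = [[3.0, 0.0], [0.0, 1.0]]                      # sigma_max = 3
sigma = spectral_norm(W)
W_lip = [[w / sigma for w in row] for row in W]   # now 1-Lipschitz
```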

Fine-Grained Attention for Weakly Supervised Object Localization

Recent advances in deep learning have accelerated improvement in the supervised object localization task. We propose a novel residual fine-grained attention (RFGA) module that autonomously excites the less activated regions of an object. Unlike other attention-based weakly supervised object localization (WSOL) methods that learn a coarse attention map, our proposed RFGA learns fine-grained values in an attention map by assigning a different attention value to each element. …
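The coarse-vs-fine-grained distinction can be made concrete with a toy feature map (the gating functions below are illustrative stand-ins, not the paper's RFGA module):

```python
# Illustrative contrast: coarse attention assigns one value to a whole map,
# while fine-grained attention assigns a distinct value to every element.

def coarse_attention(feature_map):
    # One scalar shared by the whole map (global average as "importance").
    s = sum(sum(row) for row in feature_map)
    n = sum(len(row) for row in feature_map)
    a = s / n
    return [[a for _ in row] for row in feature_map]

def fine_grained_attention(feature_map):
    # Per-element gate in (0, 1): each cell gets its own attention value.
    return [[x / (1 + abs(x)) for x in row] for row in feature_map]

fm = [[0.0, 2.0], [4.0, 6.0]]
coarse = coarse_attention(fm)          # every cell gets the same value
fine = fine_grained_attention(fm)      # each cell gets its own value
```

Fine-grained maps can therefore re-weight weakly activated cells individually, which is the behavior the abstract attributes to RFGA.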

How Should Network Slice Instances be Provided to Multiple Use Cases of a Single Vertical Industry?

A large number of vertical industries, such as automobile, manufacturing, and power grid, implement multiple use cases, each characterized by diverging service, network, and connectivity requirements. Such heterogeneity cannot be effectively managed and efficiently mapped onto a single type of network slice instance (NSI). Both approaches tackle the same technical issue of provisioning, management, and orchestration of per-vertical, per-use-case NSIs in order to improve resource allocation and enhance network performance. …

Secure Cognitive Radio Communication via Intelligent Reflecting Surface

In this paper, an intelligent reflecting surface (IRS) assisted spectrum-sharing underlay cognitive radio (CR) wiretap channel (WTC) is studied. We aim at enhancing the secrecy rate of the secondary user in this channel subject to a total power constraint at the secondary transmitter (ST), an interference power constraint (IPC) at the primary receiver (PR), and a unit-modulus constraint at the IRS. …
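For context, the secrecy rate of a wiretap channel is conventionally the legitimate link's rate minus the eavesdropper's rate, floored at zero. The sketch below uses that standard textbook definition with illustrative SNR values; it is not the paper's specific optimization objective:

```python
# Standard wiretap-channel secrecy rate (textbook definition, illustrative
# SNRs): rate achievable by the legitimate user minus what the eavesdropper
# can decode, clipped at zero.

import math

def secrecy_rate(snr_user, snr_eve):
    """Secrecy rate in bits/s/Hz for given linear SNRs."""
    return max(0.0, math.log2(1 + snr_user) - math.log2(1 + snr_eve))

r = secrecy_rate(15.0, 3.0)   # log2(16) - log2(4) = 2 bits/s/Hz
```

An IRS improves this quantity by reconfiguring reflections to raise the legitimate user's effective SNR and/or lower the eavesdropper's.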

Conversational Semantic Role Labeling

Semantic role labeling (SRL) aims to extract arguments for each predicate in an input sentence. Traditional SRL can fail to analyze dialogues because it works only on single sentences, while ellipsis and anaphora frequently occur in dialogues. To address this problem, we propose the conversational SRL task, where an argument can be the dialogue participants, a phrase in the dialogue history, or the current sentence. …

A tight negative example for MMS fair allocations

Kurokawa, Procaccia, and Wang [JACM, 2018] present instances for which every allocation gives some agent less than her maximin share. For three agents and nine items, we design an instance in which at least one agent does not get more than a $1 - \frac{1}{n^4}$ fraction of her maximin share. …