## Overprotective Training Environments Fall Short at Testing Time: Let Models Contribute to Their Own Training

Despite important progress, conversational systems often generate dialogues that sound unnatural to humans. We conjecture that the reason lies in their training and testing conditions. During training, they learn to generate an utterance given the human dialogue history. During testing, however, they must interact with each other, and hence deal with noisy data.…

## Dependency Graph-to-String Statistical Machine Translation

We present graph-based translation models which translate source graphs into target strings. Source graphs are constructed from dependency trees with extra links so that non-syntactic phrases are connected. We provide two implementations of the model with different restrictions so that source graphs can be parsed efficiently.…

## Token-wise Curriculum Learning for Neural Machine Translation

Existing curriculum learning approaches to Neural Machine Translation (NMT) require sampling sufficient amounts of “easy” samples from training data at the early training stage. This is not always achievable for low-resource languages where the amount of training data is limited.…
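The "easy-samples-first" regime the abstract describes can be sketched with a toy sampler. This is our own minimal illustration, not the paper's token-wise method: we assume difficulty is approximated by sentence length and that the eligible pool grows linearly with training progress (a simple competence schedule).

```python
import random

def curriculum_batch(corpus, step, total_steps, batch_size=4, seed=0):
    """Sample a batch from the easiest fraction of the corpus.

    "Difficulty" is approximated by sentence length (an assumption for
    this sketch); the eligible pool grows linearly with training
    progress, so early batches contain only short, "easy" sentences.
    """
    rng = random.Random(seed + step)
    ranked = sorted(corpus, key=len)                  # easy (short) first
    competence = min(1.0, 0.1 + 0.9 * step / total_steps)
    pool = ranked[: max(batch_size, int(competence * len(ranked)))]
    return rng.sample(pool, min(batch_size, len(pool)))

corpus = ["a b", "a b c d e", "a", "a b c", "a b c d e f g h", "a b c d"]
early = curriculum_batch(corpus, step=0, total_steps=100)    # short only
late = curriculum_batch(corpus, step=100, total_steps=100)   # full corpus
```

The schedule makes the low-resource failure mode visible: with little data, the "easy" pool at early steps may be too small to fill a batch.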

## Local Interpretations for Explainable Natural Language Processing: A Survey

This work investigates various methods to improve the interpretability of deep neural networks for natural language processing (NLP) tasks. We provide a comprehensive discussion on the definition of the term *interpretability* and its various aspects at the beginning of this work.…

## Attribute Alignment: Controlling Text Generation from Pre-trained Language Models

Large language models benefit from training with a large amount of unlabeled text, which gives them increasingly fluent and diverse generation capabilities. But using these models for text generation that takes into account target attributes, such as sentiment polarity or specific topics, remains a challenge.…

## Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction

We study the problem of entity-relation extraction in the presence of symbolic domain knowledge. Such knowledge takes the form of an ontology defining relations and their permissible arguments. Previous approaches set out to integrate such knowledge in their learning approaches either through self-training, or through approximations that lose the precise meaning of the logical expressions.…
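The kind of ontology constraint the abstract mentions, a relation restricting the types of its arguments, can be checked mechanically. A minimal sketch with a toy ontology of our own invention (the paper's probabilistic treatment of such constraints is more involved):

```python
# Toy ontology: relation -> (permissible head type, permissible tail type).
# These relations and types are hypothetical examples, not the paper's.
ONTOLOGY = {
    "works_for": ("PERSON", "ORG"),
    "based_in":  ("ORG", "LOC"),
}

def consistent(relation, head_type, tail_type):
    """True iff (head_type, tail_type) are permissible for the relation."""
    dom, rng = ONTOLOGY[relation]
    return head_type == dom and tail_type == rng

ok = consistent("works_for", "PERSON", "ORG")    # permissible arguments
bad = consistent("based_in", "PERSON", "LOC")    # head type violates ontology
```

A hard check like this is the crisp logical meaning that, per the abstract, approximation-based integrations tend to lose.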

## On the Impossibility of Post-Quantum Black-Box Zero-Knowledge in Constant Rounds

We investigate the existence of constant-round post-quantum black-box zero-knowledge protocols for $\mathbf{NP}$. Our main result points out a fundamental difference between post-quantum and classical zero-knowledge. We conclude that such protocols for $\mathbf{NP}$ do not exist unless we use non-black-box techniques or relax certain security requirements.…

## Round- and Communication-Balanced Protocols for Oblivious Evaluation of Finite-State Machines

We propose protocols for obliviously evaluating finite-state machines. Neither party learns the other’s input, and the states being visited are hidden from both. We present two different solutions to this problem: a two-party one, and one for a setting with an untrusted but non-colluding helper.…
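To make the object being hidden concrete, FSM evaluation can be written as a chain of matrix-vector products: each input symbol selects a 0/1 transition matrix applied to a one-hot state vector. Oblivious protocols perform this same chain under secret sharing or encryption so that neither party sees the state sequence. The sketch below shows only the cleartext computation, with an automaton of our own choosing, not the paper's protocol:

```python
import numpy as np

def fsm_matrices(delta, n_states, alphabet):
    """Turn delta[(state, symbol)] -> next_state into 0/1 matrices M[a],
    so one step of the FSM is the product M[a] @ state_vector."""
    mats = {}
    for a in alphabet:
        M = np.zeros((n_states, n_states), dtype=int)
        for s in range(n_states):
            M[delta[(s, a)], s] = 1
        mats[a] = M
    return mats

# Example automaton: state 1 iff the input contains an odd number of 'b's.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 0}
mats = fsm_matrices(delta, n_states=2, alphabet='ab')

state = np.array([1, 0])                 # one-hot encoding of start state 0
for sym in "abba":                       # two 'b's -> even parity
    state = mats[sym] @ state
final = int(np.argmax(state))            # -> 0
```

In the oblivious setting, the one-hot `state` and the matrix selections are exactly the intermediate values that must stay hidden from both parties.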

## 3DMNDT: 3D Multi-view Registration Method Based on the Normal Distributions Transform

The normal distributions transform (NDT) is an effective paradigm for point set registration. This method was originally designed for pair-wise registration, and it suffers from great challenges when applied to multi-view registration. The proposed method alternately implements data point clustering, NDT computing, and likelihood maximization until the desired registration results are obtained.…
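The cluster / fit-Gaussians / maximize-likelihood alternation can be illustrated on a 2D toy problem. This is a heavily simplified sketch of the NDT idea under our own assumptions (voxel-grid clustering, a coarse-to-fine grid search over a pure translation), not the paper's 3D multi-view optimizer:

```python
import numpy as np

def ndt_cells(points, cell=1.0):
    """Cluster points into voxel cells and fit a Gaussian (mean, cov) per cell."""
    cells = {}
    for p in points:
        cells.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    return {k: (np.mean(v, axis=0), np.cov(np.array(v).T) + 1e-3 * np.eye(2))
            for k, v in cells.items() if len(v) >= 3}

def score(points, model, cell=1.0):
    """NDT-style score: sum of negative Mahalanobis distances per cell."""
    total = 0.0
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        if key in model:
            mu, cov = model[key]
            d = p - mu
            total += -0.5 * d @ np.linalg.solve(cov, d)
    return total

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 2))
moving = ref + np.array([0.4, -0.3])          # known ground-truth offset

# Alternate: (re)fit the NDT model, then maximize the score over a
# translation grid that shrinks each round (coarse-to-fine).
t, span = np.zeros(2), 0.6
for _ in range(3):
    model = ndt_cells(ref)
    grid = np.linspace(-span, span, 13)
    shifts = [t + np.array([dx, dy]) for dx in grid for dy in grid]
    t = max(shifts, key=lambda s: score(moving + s, model))
    span /= 3
# t should end up near (-0.4, 0.3), undoing the offset
```

Grid search stands in for the likelihood maximization step; the actual method optimizes full rigid transforms over multiple views simultaneously.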

## Bootstrapped Self-Supervised Training with Monocular Video for Semantic Segmentation and Depth Estimation

For a robot deployed in the world, it is desirable to have the ability of autonomous learning to improve its initial pre-set knowledge. We formalize this as a bootstrapped self-supervised learning problem, where a system is initially bootstrapped with supervised training on a labeled dataset.…