Machine-generated humor remains a difficult task, with automatically generated jokes failing to match human-created humor. We focus our experiments on the most popular dataset, included in SemEval-2020 Task 7, and train our model to translate normal text into humorous text. Evaluated against humorous human-edited headlines, our model is preferred equally often in A/B testing, a strong success for humor generation. We also show that our model's output is judged to be human-written at a rate comparable to that of the human-edited headlines, and significantly more often than random. These results indicate that this dataset does indeed provide potential for future humor generation systems.
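The abstract reports A/B preference testing against human-edited headlines and a comparison against random chance. A minimal sketch of how such an evaluation could be scored, assuming hypothetical vote data (the vote list, function names, and counts below are illustrative, not from the paper), using an exact two-sided binomial test against the 50% chance baseline:

```python
from math import comb

def binomial_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes at most as likely as the observed count k."""
    pk = comb(n, k) * p**k * (1 - p) ** (n - k)
    total = 0.0
    for i in range(n + 1):
        pi = comb(n, i) * p**i * (1 - p) ** (n - i)
        if pi <= pk + 1e-12:  # small tolerance for float comparison
            total += pi
    return min(total, 1.0)

# Hypothetical A/B votes: 1 = model headline preferred, 0 = human edit preferred.
votes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
wins, n = sum(votes), len(votes)
p_value = binomial_two_sided_p(wins, n)
print(f"model preferred {wins / n:.0%} of the time, p = {p_value:.3f}")
```

A p-value near 1 here would mean the model and human edits are preferred about equally often, which is the "preferred equally in A/B testing" outcome the abstract describes.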

Keywords: humor, human-generated, generation, model
