Perceived Trustworthiness of Natural Language Generators
Natural Language Generation tools, such as chatbots that produce human-like conversational text, are becoming increasingly common in both personal and professional settings. However, concerns remain about their trustworthiness and ethical implications. This paper addresses the problem of understanding how different users (e.g., linguists, engineers) perceive and adopt these tools and how they judge the quality of machine-generated text. It also examines the perceived advantages and limitations of Natural Language Generation tools, as well as users' beliefs about governance strategies. The main findings include the impact of users' field and level of expertise on perceived trust in, and adoption of, Natural Language Generation tools; users' assessments of the accuracy, fluency, and potential biases of machine-generated text compared to human-written text; and an analysis of the advantages and ethical risks that participants associate with these tools. The paper further discusses the implications of these findings for improving the AI development process, shedding light on how user characteristics shape beliefs about the quality and overall trustworthiness of machine-generated text, and on the benefits and risks of these tools from the perspectives of different users.