Empowering Blue Collar Workers With An Intelligent Personal Agent

by contextere

Intelligent machines are becoming a vital part of empowering blue-collar workers with information. As the importance of this capability and its impact on the future of work continue to be recognized, we thought it was a good time to take a technical dive into Natural Language Processing and Question Answering (QA) systems, neural network models for QA, the distinction between chatbots and intelligent assistants, and how these pieces fit together in the contextere solution for blue-collar workers.

Natural Language Processing

Natural Language Processing (NLP) is an umbrella term covering several components, one of which is QA. QA is arguably one of the most difficult challenges in NLP, because a QA system must combine two capabilities: (1) understanding the meaning of a text and (2) reasoning over the relevant facts [1]. Because of this generality, most other NLP problems, such as part-of-speech tagging (POS), named entity recognition (NER), and sentiment analysis, can be cast as QA problems [1]. QA systems have also recently been used to build dialog systems that simulate human conversation [2].
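To make the "cast as QA" framing concrete, the sketch below shows a few hypothetical (context, question, answer) triples in the spirit of [1]. The sentences and answers are invented for illustration only.

```python
# Illustrative (context, question, answer) triples showing how classic NLP
# tasks can be reframed as question answering. All examples are hypothetical.

examples = [
    {   # part-of-speech tagging as QA
        "context": "The technician replaced the faulty sensor.",
        "question": "What is the part of speech of the word 'replaced'?",
        "answer": "verb",
    },
    {   # named entity recognition as QA
        "context": "The turbine was shipped to Ottawa in March.",
        "question": "Which locations are mentioned in the text?",
        "answer": "Ottawa",
    },
    {   # sentiment analysis as QA
        "context": "The new diagnostic tool saved us hours on every inspection.",
        "question": "What is the sentiment of this statement?",
        "answer": "positive",
    },
]

for ex in examples:
    print(f"Q: {ex['question']}\nA: {ex['answer']}\n")
```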

In recent years, these challenges have begun to be addressed thanks to advances in computational power and the availability of much larger datasets. This shift has enabled powerful machine learning (ML) systems and, in particular, the adaptation of deep learning models to NLP problems such as machine translation, speech recognition, and QA.

Neural Network Models for QA Systems

The adaptation of neural networks (NN) to a wide range of NLP tasks, such as sentiment analysis, POS tagging, machine translation, and language modeling [3], demonstrates their power in NLP applications. In recent years, advances in deep neural network models, such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) [4] networks, have made it possible to tackle some of the more complex NLP problems, including language modeling and machine translation. Deep LSTMs [5] have shown a remarkable ability to handle long sequences of data and embed them into vector representations that carry enough information to translate one language into another. These recurrent neural network (RNN) units have also been shown in the academic literature to handle QA problems successfully. In addition, refinements such as attention mechanisms [6] and Dynamic Memory Networks [1] provide state-of-the-art performance for deep-learning-based QA.
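As a rough illustration of how these pieces fit together, here is a minimal sketch of an LSTM-based QA model with a simple attention mechanism, assuming PyTorch is available. The dimensions, layer choices, and answer head are illustrative only, not a production architecture or the specific models in [1], [4]-[6].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveQAModel(nn.Module):
    """Toy QA model: LSTM encoders for context and question, plus attention."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Separate recurrent encoders for the context passage and the question.
        self.context_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.question_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.answer_head = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, context_ids, question_ids):
        # Encode the context into a sequence of hidden states.
        ctx_states, _ = self.context_rnn(self.embed(context_ids))        # (B, Tc, H)
        # Encode the question into a single summary vector (final hidden state).
        _, (q_final, _) = self.question_rnn(self.embed(question_ids))    # (1, B, H)
        q_vec = q_final.squeeze(0)                                       # (B, H)

        # Attention: score each context position against the question vector,
        # then take a weighted sum of the context states.
        scores = torch.bmm(ctx_states, q_vec.unsqueeze(2)).squeeze(2)    # (B, Tc)
        weights = F.softmax(scores, dim=1)
        attended = torch.bmm(weights.unsqueeze(1), ctx_states).squeeze(1)  # (B, H)

        # Predict a single answer token from the combined representation.
        return self.answer_head(torch.cat([attended, q_vec], dim=1))     # (B, V)

# Example usage with toy token ids.
model = AttentiveQAModel(vocab_size=1000)
context = torch.randint(0, 1000, (2, 20))   # batch of 2 contexts, 20 tokens each
question = torch.randint(0, 1000, (2, 8))   # batch of 2 questions, 8 tokens each
logits = model(context, question)           # (2, 1000) answer-token scores
```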

The Distinction Between Intelligent Assistants and Chatbots

Although the terms intelligent assistant (or virtual assistant) and chatbot are sometimes used interchangeably, they are in fact different technologies. Chatbots typically operate in single-turn exchanges, where each user request is mapped to a specific task or piece of information [7]. OK Google and Alexa are examples of this.
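The sketch below illustrates this single-turn behaviour in its simplest form: each utterance is mapped independently to an intent and a canned response, with no memory carried between turns. The intents and keyword rules are hypothetical examples, not how any particular commercial assistant works.

```python
# Single-turn chatbot sketch: one request in, one mapped response out.

INTENT_RULES = {
    "weather": ["weather", "temperature", "forecast"],
    "timer":   ["timer", "remind", "alarm"],
    "music":   ["play", "song", "music"],
}

RESPONSES = {
    "weather": "Here is today's forecast.",
    "timer":   "Timer set.",
    "music":   "Playing your music.",
    "unknown": "Sorry, I didn't understand that.",
}

def handle_utterance(text):
    """Map a single user request to a task; nothing carries over to the next turn."""
    lowered = text.lower()
    for intent, keywords in INTENT_RULES.items():
        if any(word in lowered for word in keywords):
            return RESPONSES[intent]
    return RESPONSES["unknown"]

print(handle_utterance("What's the weather like today?"))  # Here is today's forecast.
print(handle_utterance("Set a timer for ten minutes"))     # Timer set.
```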

An intelligent assistant, on the other hand, provides information or help based on the user's current and anticipated needs. It acts as a kind of mentor: it is always present, is intimately familiar with both the task and the user, and can provide help in real time.

QA System at contextere

At contextere, we’re building an intelligent assistant for the industrial workforce. We use deep recurrent neural networks (DRNN) together with NLP techniques to help blue-collar workers perform field service and inspection jobs. The QA algorithms in our intelligent assistant draw on different types of input, such as user-manual text, tables, figures, and historical data, to answer questions posed by the user or raised by the context. Our unique approach to combining deep learning models and NLP techniques will result in a state-of-the-art solution that improves the productivity, competency, and safety of blue-collar workers in the future of work.
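To give a flavour of what drawing on heterogeneous inputs can look like, here is a hypothetical sketch of flattening manual text, a specification table, and historical records into a single passage for a text-based QA model. This is an illustration of the general idea only; it is not contextere's actual pipeline, and the field names and values are invented for the example.

```python
def table_to_text(table):
    """Linearize table rows into sentences a text-based QA model can consume."""
    return " ".join(
        ", ".join(f"{col}: {val}" for col, val in row.items()) + "."
        for row in table
    )

def build_context(manual_text, spec_table, history):
    """Combine the available sources into one passage for a QA model."""
    return "\n".join([manual_text, table_to_text(spec_table), " ".join(history)])

context = build_context(
    manual_text="Torque the flange bolts in a star pattern.",
    spec_table=[{"bolt size": "M12", "torque": "80 Nm"}],
    history=["Last inspection on 2019-03-02 found no defects."],
)
question = "What torque should be applied to the M12 bolts?"
# The assembled context and question could then be fed to a QA model such as
# the LSTM-with-attention sketch shown earlier.
```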


Bibliography

[1] Kumar, Ankit, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. “Ask me anything: Dynamic memory networks for natural language processing.” In International Conference on Machine Learning, pp. 1378-1387. 2016.

[2] Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. “Neural machine translation by jointly learning to align and translate.” arXiv preprint arXiv:1409.0473 (2014).

[3] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. “Sequence to sequence learning with neural networks.” In Advances in neural information processing systems, pp. 3104-3112. 2014.

[4] Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9, no. 8 (1997): 1735-1780.

[5] Graves, Alex. “Supervised sequence labelling.” In Supervised sequence labelling with recurrent neural networks, pp. 5-13. Springer, Berlin, Heidelberg, 2012.

[6] Mnih, Volodymyr, Nicolas Heess, and Alex Graves. “Recurrent models of visual attention.” In Advances in neural information processing systems, pp. 2204-2212. 2014.

[7] “Who’s talking? Conversational agent vs. chatbot vs. virtual assistant.” SearchCIO. http://bit.ly/2EufGVh