AI is already changing our lives. From systems that predict the weather and the stock market to facial recognition and internet search results, its applications grow more extensive all the time. Some uses of AI are relatively low risk, such as suggesting the next song on a Spotify playlist. Others are potentially life-changing, like predicting cancer from a scan or flagging someone as a terrorism suspect.

Some uses of AI seem low risk but have huge societal consequences, such as choosing which posts appear in a Facebook feed. Optimising these models for maximum engagement has unintentionally led to incendiary posts being prioritised and to the massive proliferation of conspiracy theories.

There has been a lot of publicity about the problems associated with trusting AI, and there is an active community of researchers and engineers working to make AI more beneficial to humans. Briefly, the problems arise when the creators of AI datasets and systems do not consider the ethical implications of their work, or do not mitigate unintended biases in their data and models.

Arguably, we should not use AI at all for some purposes, such as predicting attractiveness from a portrait photo. A more pervasive problem, however, is that models are trained on data and absorb the biases in that data. This can lead to unfair outcomes: studies have shown, for example, that speech recognition systems work far worse on women's voices.
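To make that concrete, here is a minimal sketch of how such a disparity can be surfaced: compare a speech recognition model's word error rate across speaker groups. The transcripts, group labels, and helper function below are invented for illustration and are not from any real evaluation.

```python
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

# Hypothetical evaluation set: (speaker group, reference transcript, model output).
samples = [
    ("female", "book a review meeting for tuesday", "book a review meeting for tuesday"),
    ("female", "increase my monthly pension contribution", "increase my monthly tension contribution"),
    ("male",   "book a review meeting for tuesday", "book a review meeting for tuesday"),
    ("male",   "increase my monthly pension contribution", "increase my monthly pension contribution"),
]

# Aggregate error rates per group: a large gap signals a bias to investigate.
per_group = defaultdict(list)
for group, ref, hyp in samples:
    per_group[group].append(word_error_rate(ref, hyp))

for group, rates in per_group.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```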

This is Human+

There is no single fix for AI, but one of the most important aspects of making it safe and beneficial to humans is not to treat it as an isolated ‘black box’ expert. Instead, if we put humans at the centre of a system that leverages AI where appropriate and under human supervision, we can harness the best aspects of both human and artificial intelligence.

At Aveni we call this human-centred AI: Human+. We design and investigate new forms of human-AI experience and interaction that enhance and expand human capabilities for the good of our products, our clients, and society at large. Ultimately, AI’s long-term success depends on our acknowledgement that people are critical to its design, operation, and use. We take an interdisciplinary approach that involves specialists in natural language processing, human-computer interaction, computer-supported cooperative work, data visualisation, and design in the context of AI.

Adhering to the core value that Human+ is better than either human or AI in isolation, we develop novel user experiences and visualisations that foster human-AI collaboration. This helps fulfil artificial intelligence’s destiny: to be a natural extension of human intelligence, helping people and organisations make wiser decisions. Human+ is a partnership in which people take on specification, goal setting, high-level creativity, curation, and oversight. The AI, in turn, augments human abilities by absorbing large amounts of low-level detail and rapidly synthesising across many features and data points.

Our models are explainable to human operators, and we incorporate human feedback in the continual development of our models. 
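The exact explanation methods behind our models are not described here, but one simple way to make a prediction explainable to a human operator is to surface the per-feature contributions behind a score. The sketch below assumes a hand-written linear scorer with invented feature names and weights; a production system might instead use coefficients from a trained model or an attribution method such as SHAP.

```python
# Minimal sketch of per-feature explanations for a linear score.
# Feature names and weights are invented for illustration only.
WEIGHTS = {
    "mentions_vulnerability": 1.8,
    "product_complexity":     0.9,
    "adviser_talk_ratio":    -0.4,
}
BIAS = -1.0

def score_with_explanation(features: dict) -> tuple[float, list]:
    """Return a score and the per-feature contributions behind it."""
    contributions = [(name, WEIGHTS[name] * value) for name, value in features.items()]
    score = BIAS + sum(value for _, value in contributions)
    # Sort so a human operator sees the most influential features first.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    return score, contributions

score, why = score_with_explanation({
    "mentions_vulnerability": 1.0,
    "product_complexity": 0.5,
    "adviser_talk_ratio": 0.7,
})
print(f"score = {score:.2f}")
for name, contribution in why:
    print(f"  {name}: {contribution:+.2f}")
```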

Keeping humans in the loop 

Human-in-the-loop is an approach that combines artificial and human intelligence to create machine learning (ML) models. Humans are involved in setting up the system, tuning and testing the model so that its decision-making improves, and then actioning the decisions it suggests. It is this tuning and testing cycle that makes AI systems smarter, more robust, and more accurate with use.
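As a rough illustration of that cycle, the sketch below shows a toy version: the model suggests a label, a human accepts or corrects it, and corrections are queued up as new training data. The classifier, review step, and messages are invented placeholders, not a real pipeline.

```python
# Toy human-in-the-loop cycle: predict, review, feed corrections back.

def model_predict(text: str) -> str:
    """Toy classifier: flags a message as a complaint if it contains 'unhappy'."""
    return "complaint" if "unhappy" in text.lower() else "routine"

def human_review(text: str, suggestion: str) -> str:
    """Stand-in for a human operator accepting or correcting the suggestion."""
    # Here the 'human' corrects one case the toy model misses.
    return "complaint" if "disappointed" in text.lower() else suggestion

labelled_data = []   # corrections accumulate here and feed later retraining
incoming = [
    "I am unhappy with the fee increase",
    "Please update my address",
    "I am disappointed by the delay",
]

for message in incoming:
    suggestion = model_predict(message)
    decision = human_review(message, suggestion)      # human supervision
    if decision != suggestion:
        labelled_data.append((message, decision))     # correction becomes training data
    print(f"{message!r}: model={suggestion}, final={decision}")

print(f"{len(labelled_data)} correction(s) queued for the next retraining run")
```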

With human-in-the-loop machine learning, businesses can enhance and expand their capabilities with trustworthy AI systems whilst humans set and control the level of automation. Simpler, less critical tasks can be fully automated, and more complex decisions can operate under close human supervision.
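One simple way to set and control the level of automation is a confidence threshold: act automatically only when the model is very confident, and route everything else to a person. The classifier, threshold, and messages below are illustrative assumptions, not a description of any particular product.

```python
# Sketch of confidence-based routing between automation and human review.
AUTO_THRESHOLD = 0.90   # only act automatically when the model is very confident

def classify_with_confidence(text: str) -> tuple[str, float]:
    """Toy stand-in for a model that returns a label and a confidence score."""
    if "cancel" in text.lower():
        return "cancellation request", 0.97
    return "general enquiry", 0.62

for message in ["Please cancel my policy", "Can you explain this charge?"]:
    label, confidence = classify_with_confidence(message)
    if confidence >= AUTO_THRESHOLD:
        print(f"AUTO   : {message!r} -> {label} ({confidence:.2f})")
    else:
        print(f"REVIEW : {message!r} -> {label} ({confidence:.2f}), sent to a human")
```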

One of the key problems is that machine learning can take time to reach an acceptable level of accuracy. A model needs to process a lot of training data before it can make reliable decisions, which can hold back businesses adopting it for the first time.

Human-in-the-loop machine learning gives AI software a shortcut through this process. With human supervision, the model can learn from human judgement and deliver more accurate results despite a lack of data. The system learns and improves faster, and any biases or blind spots can be detected and remedied quickly.
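One common technique behind this shortcut is uncertainty sampling: ask humans to label only the examples the model is least sure about, so each label teaches the model as much as possible. The confidence scores and review budget below are invented for illustration.

```python
# Sketch of uncertainty sampling: route the least confident examples to people first.
unlabelled = {
    "transfer my ISA to a new provider": 0.51,   # model confidence in its best guess
    "what time do you open on saturday": 0.96,
    "i think i was mis-sold this product": 0.55,
    "send me a new statement": 0.93,
}

review_budget = 2
to_label = sorted(unlabelled, key=unlabelled.get)[:review_budget]
print("Send to human annotators:", to_label)
```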

I am very excited about the potential of AI and NLP to genuinely benefit people, giving them access to more affordable and more reliable support and advice. These sophisticated tools come with drawbacks, but those can be mitigated by taking a Human+ approach to system design: making automation explainable and incorporating user feedback.

As these transformative technologies are adopted across industries, affecting a myriad of critical functions, we need a clearer understanding of the challenges and benefits that AI brings. A human-centric adoption of AI mitigates its worst drawbacks and makes a beneficial impact more likely. There is no competition between human and artificial intelligence; both are needed. In fact, using AI to help humans achieve higher levels of creativity, intuition, and insight is very exciting.