AI is already changing our lives, from expert systems that predict the weather and the stock market to facial recognition and internet search results, and its applications keep growing. Some uses of AI are relatively low risk, such as suggesting the next song on a Spotify playlist. Others are potentially life-changing, like predicting cancer from a scan or identifying a terrorist.
Some uses of AI seem low risk but have huge societal consequences, such as ranking posts in a Facebook feed. Optimising these models for maximum engagement has unintentionally prioritised incendiary posts and fuelled the massive proliferation of conspiracy theories.
There has been a lot of publicity about the problems associated with trusting AI, but there is also an active community of researchers and engineers working to make AI more beneficial to humans. Briefly, the problems arise when the creators of AI datasets and systems do not consider ethical implications, and/or do not mitigate unintended biases in data and models.
Arguably, AI should not be used at all for some purposes, e.g. predicting attractiveness from a portrait photo. A broader problem, however, is that models are trained on data and absorb the biases in that data, which can lead to unfair outcomes. For example, studies show that speech recognition systems perform far worse on women’s voices.
This is Human+
There is no single solution to fixing AI, but one of the most important aspects of improving its reliability is making it safe and beneficial to humans. We should not treat AI as an isolated ‘black box’ expert. Instead, if we put humans at the centre of a system that leverages AI when appropriate and under human supervision, we can harness the best aspects of both human and artificial intelligence.
This human-centred AI is known as “human-in-the-loop”. At Aveni, we call it Human+. Our free whitepaper discusses the concept in full detail. We design and investigate new forms of human-AI experiences and interactions that enhance and expand human capabilities for the good of our products, clients, and society at large. Ultimately, the impact of AI and its long-term success depend upon our acknowledgement that people are critical in its design, operation, and use. We take an interdisciplinary approach that involves specialists in natural language processing, human-computer interaction, computer-supported cooperative work, data visualisation, and design in the context of AI.
We adhere to the core value that human-in-the-loop is better than either humans or AI in isolation. In so doing, we develop novel user experiences and visualisations that foster human-AI collaboration. This helps AI fulfil its destiny: to be a natural extension of human intelligence, helping humans and organisations make wiser decisions. Human+ is a partnership in which people take on specification, goal setting, high-level creativity, curation, and oversight, while the AI augments human abilities by absorbing large amounts of low-level detail and synthesising quickly across many features and data points.
Our models are explainable to human operators, and we incorporate human feedback in their continual development.
Keeping humans in the loop
Human-in-the-loop is a branch of AI that brings together artificial and human intelligence to create machine learning (ML) models. Humans are involved in setting up the systems, tuning and testing the models, improving the decision-making, and actioning the decisions the system suggests. The tuning and testing stage is what makes AI systems smarter, more robust and more accurate through use.
With human-in-the-loop machine learning, businesses can enhance and expand their capabilities with trustworthy AI systems whilst humans set and control the level of automation. Simpler, less critical tasks can be fully automated, and more complex decisions can operate under close human supervision.
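As an illustration, one common human-in-the-loop pattern routes each model prediction by confidence: confident predictions are actioned automatically, uncertain ones are escalated to a human reviewer, and the reviewer’s corrections are collected as fresh training examples. The threshold, labels, and reviewer below are hypothetical, a minimal sketch of the pattern rather than Aveni’s actual implementation.

```python
# Minimal human-in-the-loop routing sketch (hypothetical, illustrative only).
# Each prediction carries a confidence score; only confident predictions are
# auto-actioned, the rest go to a human reviewer whose corrections become
# new training data for the model's continual development.

AUTOMATION_THRESHOLD = 0.9  # humans set and control this level of automation

def route(prediction, confidence, human_review):
    """Return (final_label, source), escalating uncertain cases to a person."""
    if confidence >= AUTOMATION_THRESHOLD:
        return prediction, "automated"          # simple, low-risk case
    corrected = human_review(prediction)        # close human supervision
    return corrected, "human-reviewed"

# A stub reviewer: overturns one uncertain prediction for demonstration.
reviewer = lambda pred: "complaint" if pred == "query" else pred

feedback_examples = []  # corrections fed back into model retraining
for pred, conf in [("query", 0.97), ("query", 0.55), ("complaint", 0.95)]:
    label, source = route(pred, conf, reviewer)
    if source == "human-reviewed":
        feedback_examples.append((pred, label))

print(feedback_examples)  # only the low-confidence case, corrected by the human
```

In this sketch the threshold is the dial the business turns: raising it routes more decisions to people, lowering it automates more, which matches the idea that humans set and control the level of automation.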
One of the key problems is that machine learning can take time to achieve a certain level of accuracy: a model needs to process lots of training data to learn how to make decisions, which can delay businesses adopting it for the first time.