BT recently announced that it would cut 55,000 jobs from its workforce, of which around 11,000 are related to the use of artificial intelligence (AI). The remaining reductions are due to business efficiencies, such as replacing copper cables with more reliable fiber optic alternatives.
The announcement raises many questions about AI's impact on the wider economy: which jobs will be most affected by the technology, how will these changes occur, and how will they be felt?
The development of technology and its effect on job security has been a recurring theme since the Industrial Revolution. Where mechanization was once the cause of concern about job losses, today it is increasingly capable AI algorithms. Yet for many, perhaps most, job categories, retaining humans will continue to be important for the foreseeable future. The technology behind this current revolution is primarily what is known as a large language model (LLM), which is capable of providing relatively human-like responses to queries. It is the basis for OpenAI's ChatGPT, Google's Bard system, and Microsoft's Bing AI.
These are all neural networks: mathematical computing systems modeled crudely on the way nerve cells (neurons) fire in the human brain. These complex neural networks are often trained on – or familiarized with – text retrieved from the Internet.
The training process enables the system to take a question posed in conversational language, break it down into components the algorithm can work with, and then process those components to generate a response appropriate to the question asked.
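At their core, these models work by repeatedly predicting the most likely next token given the tokens so far. The toy sketch below illustrates only that principle: the vocabulary, context and scores are invented for illustration, and real LLMs learn billions of weights rather than a hand-written table.

```python
import math

# Invented scores for candidate next tokens, keyed by the preceding
# two-token context. Real models learn such weights from vast text corpora.
SCORES = {
    ("what", "is"): {"ai": 2.0, "the": 1.0, "banana": -3.0},
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(context):
    """Return the most probable next token for a known context."""
    probs = softmax(SCORES[tuple(context)])
    return max(probs, key=probs.get)

print(next_token(["what", "is"]))  # → ai
```

Chaining such predictions token by token is what produces the fluent, conversational answers described above.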
The result is a system capable of giving a seemingly intelligent answer to almost any question it is asked. The implications are much broader than they may at first seem.
Humans in the loop
Just as GPS navigation can replace a driver's need to know a route, AI gives workers the opportunity to have all the necessary information at their fingertips, without "Googling" it.
Effectively, this removes humans from the loop, meaning that any job that involves taking in information and making connections between pieces of it could be at risk. The most obvious example is call center jobs.
However, it is possible that members of the public will not accept AI solving their problems, even if call waiting times are greatly reduced.
Manual jobs, by contrast, face only a very remote risk of substitution. While robots are becoming more capable and dexterous, they operate in highly constrained environments, relying on sensors to feed them information about the world and then making decisions on this incomplete data. AI is not yet ready for such settings: the world is a messy and uncertain place in which adaptable humans excel. Plumbers, electricians and those in complex manufacturing jobs – in automotive or aircraft production, for example – face little or no competition in the long run.
However, the real impact of AI is likely to be felt in terms of efficiency savings rather than outright job replacement. The technology is likely to gain quick traction as an assistant to humans. This is already happening, especially in domains like software development.
Instead of using Google to find out how to write a particular piece of code, it is more efficient to ask ChatGPT. The solution that comes back can be tailored to an individual's needs, delivered efficiently and without unnecessary detail.
Safety-critical systems
This type of application will become more common as the AI devices of the future become true intelligent assistants. Whether companies use this as an excuse to reduce their workforce will depend on their workload.
As the UK continues to struggle with a shortage of STEM (science, technology, engineering and maths) graduates, particularly in disciplines such as engineering, job losses in this sector are unlikely; instead, AI offers a more efficient way of dealing with the current workload.
Much depends on employees making the most of the opportunities the technology provides. Naturally, there will always be skepticism, and the adoption of AI in the development of safety-critical systems, such as those in medicine, will take a long time. This is because trust in the developer is important, and the simplest way to instill it is to keep a human at the center of the process.
This is important because these LLMs are trained on text from the Internet, so biases and errors are built in. Some arise incidentally – for example, a person being associated with a particular event simply because they share a name with someone else. More seriously, errors may be malicious, with false or intentionally misleading material deliberately planted in the training data.
Cyber security is a growing concern as systems become more networked, and so is the provenance of the data used to build AI. LLMs rely on open information as a building block that is then refined through conversation. This opens up new ways to attack the system by deliberately creating falsehoods.
For example, hackers can create malicious sites and place them where they are likely to be picked up by AI chatbots. Because the system must be trained on so much data, it is difficult to verify that all of it is correct.
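One crude defence against such poisoning is to filter candidate training sources against an allowlist of trusted domains before ingestion. The sketch below is a minimal illustration of that idea; the domain list and URLs are hypothetical, and a real pipeline would use far richer provenance checks than a simple domain match.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted domains (illustrative only).
TRUSTED_DOMAINS = {"en.wikipedia.org", "docs.python.org"}

def is_trusted(url):
    """Accept a source URL only if its host appears on the allowlist."""
    host = urlparse(url).netloc.lower()
    return host in TRUSTED_DOMAINS

sources = [
    "https://en.wikipedia.org/wiki/Large_language_model",
    "https://malicious.example.com/planted-page",  # attacker-planted page
]
accepted = [u for u in sources if is_trusted(u)]
print(accepted)  # only the allowlisted source survives filtering
```

Such filtering trades coverage for safety: it blocks planted pages on unknown hosts, but cannot catch falsehoods published on an otherwise trusted domain.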
As workers, then, we need to harness the potential of AI systems and use them to the full, while always questioning what we get from them rather than blindly trusting their output. This brings to mind the early days of GPS, when the system often took drivers onto roads unsuitable for their vehicles.
If we apply a skeptical mindset to this new tool, we will maximize its potential while the workforce grows – as we have seen through all previous industrial revolutions.
News source: multiple agencies (Hindustan Times, TechRepublic, Computer Weekly).