Standardised norms in the offing to check AI systems' robustness

Once the standards are developed, businesses using AI technologies will be graded on a number of factors, such as how quickly a system can be restored in the event of a virus, hacking, or other malfunction. According to official sources, businesses may initially be required to self-certify their systems and publish their ratings.

A defined set of norms is necessary because AI technology is rapidly being employed in practically all industries with consumer interfaces, and there is always a risk of systems being hacked and producing false results that harm consumers. Autonomous vehicles, for example, rely on the ability to make decisions in real time, such as recognising traffic signals and knowing when to stop and start. Similarly, AI connectivity is essential to automated parking systems. If one of the many telecom network-connected systems is compromised, everything could go awry. Uniform guidelines are required in such situations to evaluate the time needed for recovery and the interim arrangements required.

The Telecommunication Engineering Centre (TEC) is currently collaborating with the industry on a report covering a range of topics: how the standards will be developed, what applications can be expected in future telecom and digital infrastructure networks, which metrics should be taken into account when assessing the robustness of AI systems, and what role the government can play in ensuring that robustness.