Dallmeier Perimeter Visual

In this article, Maximilian Sand, Team Leader Artificial Intelligence at Dallmeier Electronic, highlights some basic principles for assessing the functionality and benefit of video analysis based on Artificial Intelligence.

Video analysis based on Artificial Intelligence promises a quantum leap in technology with great benefit to the customer. But only if the critical, informed user is able to evaluate the technology correctly.

Video security technology has incorporated procedures based on Artificial Intelligence for a long time now. More and more new applications and products use such algorithms to deliver new analytics or make existing ones more robust.

The goal is clear added value for users, and the results speak for themselves. Where classic image processing once required great effort to reliably classify a tree moved by the wind as a false alarm, an AI now handles this without problems.

The essential difference between image or video analysis based on classic image processing and analysis based on Artificial Intelligence is that the algorithms are no longer just programmed but trained, with a lot of data.

Using this data, the system learns to recognize patterns and, for example, to differentiate a tree from an intruder. But the concept of machine learning also poses new problems and challenges.

A prominent example is the difference in recognition quality for different ethnic groups, a problem that has even made headlines. The underlying cause is relatively simple: an Artificial Intelligence can only learn robustly when there is data in sufficient quantity, with sufficient diversity and an even distribution.

[Image: Dallmeier comparison image, AI in summer vs. winter]

AI System Quality

All this leads to questions about the performance of a system that uses Artificial Intelligence. What measures are used to compare, for example, two procedures, different systems or manufacturers? What does it mean if a brochure promises "95% detection accuracy" or "reliable recognition"? How good is a precision of 95%? And what, exactly, is reliable recognition?

To answer these questions, you first have to understand how AI procedures can be evaluated. The first step is a specific definition, based on the application and the customer, of what "false" and "correct" mean, especially in borderline cases. For example, in a person-detection system, should a detection be assessed as correct if the image or video shows not a real person but only an advertising poster with a person on it?

This and other parameters have to be established. Once this definition exists, you need a dataset for which the expected correct results are known.

The AI analyses this dataset, and the ratio of correct to false detections is determined. Statistics provides the user with different metrics, such as sensitivity, also called recall (the proportion of expected detections that were actually detected), or precision (the proportion of detections that are actually correct). The "quality" of an AI is therefore always a statistical statement about the evaluation dataset that was used.
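To make these two metrics concrete, here is a minimal sketch (not taken from any Dallmeier product, counts are hypothetical) showing how precision and recall are computed from the numbers of correct, spurious and missed detections:

```python
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Compute precision and recall (sensitivity) from detection counts.

    true_positives: detections that match an expected object
    false_positives: detections with no matching expected object
    false_negatives: expected objects that were not detected
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical evaluation: 950 correct detections, 50 spurious ones, 30 missed objects
p, r = precision_recall(950, 50, 30)
print(f"precision = {p:.1%}, recall = {r:.1%}")  # precision = 95.0%, recall = 96.9%
```

A brochure figure such as "95% detection accuracy" only becomes meaningful once it is clear which of these ratios it refers to and on which dataset it was measured.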

Summer or winter?

How useful this statement really is to the user or potential customer of a system depends on the distribution of the dataset. An evaluation can certify good detection performance, but if the dataset consists exclusively of images from the summer months, it says nothing about the quality of the AI in winter, when light and weather conditions can differ considerably.
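As a small illustration of why the distribution matters, the sketch below (hypothetical labels and numbers) groups evaluation results by the condition they were recorded under; a single overall figure can look acceptable while one condition performs much worse:

```python
from collections import defaultdict

# Hypothetical evaluation results: (condition tag, detection was correct?)
results = [
    ("summer", True), ("summer", True), ("summer", True), ("summer", False),
    ("winter", True), ("winter", False), ("winter", False),
]

per_condition = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
for condition, correct in results:
    per_condition[condition][0] += int(correct)
    per_condition[condition][1] += 1

overall = sum(c for c, _ in per_condition.values()) / sum(t for _, t in per_condition.values())
print(f"overall accuracy: {overall:.0%}")
for condition, (correct, total) in per_condition.items():
    print(f"  {condition}: {correct}/{total} = {correct / total:.0%}")
```

If the winter samples were missing entirely, the overall figure would say nothing about winter performance, which is exactly the point made above.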

As a general rule, statements about the quality of an AI analysis, particularly those with concrete numbers such as "99.9%", should be treated with caution when not all the parameters are known. Without knowledge of the dataset used, the applied metric and other parameters, no clear statement can be made about how representative the result is.

There can be no exact indications

Every system has its limits, and that naturally includes AI systems. Knowing those limits is the basic requirement for making informed decisions. But here, too, statistics and reality intersect, as the following example shows. Logically, an AI recognizes objects in the image or video less reliably the smaller they are.

One of the first questions a user faces before purchasing a system is the maximum distance at which objects can be detected, since it influences the number of cameras needed and therefore the cost of the entire system. But stating an exact distance is not possible: there is simply no value up to which the analysis delivers 100% correct results and another value beyond which detection is impossible.

An evaluation can only provide statistics here, such as the detection accuracy as a function of object size.
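Such a statistic might be produced along the lines of the following sketch (hypothetical pixel sizes and results, not a Dallmeier specification), which bins detection results by object size and reports the detection rate per bin:

```python
# Hypothetical per-object results: (object height in pixels, was it detected?)
detections = [
    (120, True), (95, True), (60, True), (45, False),
    (30, True), (22, False), (15, False),
]

# Size bins in pixels (assumed thresholds for illustration only)
bins = {"< 25 px": (0, 25), "25-50 px": (25, 50), "50-100 px": (50, 100), ">= 100 px": (100, float("inf"))}

for label, (low, high) in bins.items():
    in_bin = [detected for size, detected in detections if low <= size < high]
    if in_bin:
        rate = sum(in_bin) / len(in_bin)
        print(f"{label}: {sum(in_bin)}/{len(in_bin)} detected ({rate:.0%})")
```

The result is a curve of detection rate against object size rather than a single cut-off value, which is why a single "maximum distance" figure can only ever be an approximation.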

Better to compare directly

Regarding system limits, the usual approach is to describe them, as far as possible, with specific minimum or maximum values, for example in product data sheets. These include values such as a minimum distance or a minimum resolution.

This is reasonable, as customers or installers need benchmarks to evaluate the system. However, considerable uncertainty remains, for example as to whether the manufacturer has specified these limit values conservatively or optimistically. Users would do well to keep in mind that in video analysis there are often no clear, sharply defined limits.

It is the same with every system: errors will occur even within the specified parameters and, at the same time, under good conditions, useful results can be achieved even beyond the stated limits.

If a user wants to determine the true quality of an AI-based analysis, this is only possible through a direct comparison; the figures and parameters of different manufacturers differ too much. And, of course, the framework conditions and input must be the same for all systems.

A real test with demo or loaned products is a good way to do this. In addition, system performance is then demonstrated directly in the required use case.

This, incidentally, is also the key when evaluating the performance of AI systems in general: it depends entirely on the specific use case, which should therefore be specified as accurately as possible. Then, with the right solution, it will be possible to deliver real added value to the customer.

Maximilian Sand

Team Leader Artificial Intelligence at Dallmeier Electronic