AUTOMATION FOR EVERYONE
One of the major implications of using drones for inspection is the vast number of images that are collected. The time and effort it takes to properly analyze thousands of images and verify problems increases the fixed costs.
In the ideal scenario, images are uploaded to a server and the software finds, marks, and classifies damage, then generates a report for you to deliver down the value chain within minutes.
Undoubtedly, drone operators and their clients can advocate for this technology, mainly because it can reduce the fixed costs involved in drone inspections and, as a result, increase demand.
Our research and client interviews show that most utility companies want to increase their inspection rate but are reluctant to do so because of the high costs involved. This technology can lower the cost of inspections and increase demand for these services.
Data analysis: What do we need to reach automation?
One of the common problems Machine Learning can solve is automatically classifying images that exhibit a certain problem, or anomaly. To teach a computer algorithm to find an element in an image at commercial-grade accuracy, we must provide it with a training set: a set of images that 'clarifies' to the computer what it should look for and what it may ignore.
The first implication of this requirement is the need for a high-quality training set of images: in practice, thousands of images that have been manually verified as relevant by humans. Those images are the instructions for the algorithm, which will try to recognize patterns that describe the learning goal.
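To make the idea of "images as instructions" concrete, here is a deliberately tiny sketch (not Scopito's actual algorithm): synthetic 8x8 patches stand in for inspection photos, human-verified labels stand in for the manual classification step, and a nearest-centroid rule stands in for the learned model. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_patch(damaged: bool) -> np.ndarray:
    """Synthetic 8x8 grayscale 'inspection' patch.
    Damaged patches carry a bright spot standing in for a visible fault."""
    patch = rng.normal(0.2, 0.05, size=(8, 8))
    if damaged:
        patch[3:5, 3:5] += 0.8  # the 'fault' signature
    return patch.clip(0.0, 1.0)

# The training set: images paired with human-verified labels (1 = damaged).
train_images = [make_patch(d) for d in [True] * 50 + [False] * 50]
train_labels = [1] * 50 + [0] * 50

# 'Learning' here is just averaging each class into a prototype image.
X = np.stack([im.ravel() for im in train_images])
y = np.array(train_labels)
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(patch: np.ndarray) -> int:
    """Assign a new image to the nearest class prototype."""
    v = patch.ravel()
    return min(centroids, key=lambda c: float(np.linalg.norm(v - centroids[c])))

print(classify(make_patch(True)), classify(make_patch(False)))  # 1 0
```

The point of the sketch is the data, not the model: with too few labeled examples, or mislabeled ones, the class prototypes blur together and accuracy collapses, which is exactly why the manually verified training set matters.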
Companies that want to achieve usable accuracy must get hold of this data, most likely by collaborating with someone who has the access and resources to analyze that huge amount of data.
To explain the extent of the problem, take for example a broken insulator on a high-voltage power line. Many types of damage and faults can occur, and for each type we need a set of training images that must be analyzed manually and classified as the learning goal for the algorithm.
We are talking about data that is large in quantity and has wide variance too.
And frankly… it’s hard to get.
So, how close are we to automation?
In short, very close.
Some objects and issues are easier for Machine Learning to identify than others. For example, it is easy for them to identify heat spots when using thermal inspections.
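Heat spots are an easier target because a radiometric thermal image already encodes the quantity of interest as pixel values, so detection can be as simple as a threshold. A minimal sketch of that idea (the threshold value and function name are illustrative assumptions, not a real product API):

```python
import numpy as np

def find_heat_spots(temps: np.ndarray, threshold_c: float = 80.0):
    """Return (row, col) coordinates of pixels hotter than threshold_c.
    `temps` is a 2D array of per-pixel temperatures in degrees Celsius,
    as a radiometric thermal camera would provide."""
    rows, cols = np.nonzero(temps > threshold_c)
    return list(zip(rows.tolist(), cols.tolist()))

# A 4x4 thermal frame at ambient temperature, with one overheating
# component at position (1, 2).
frame = np.full((4, 4), 35.0)
frame[1, 2] = 95.0
print(find_heat_spots(frame))  # [(1, 2)]
```

Contrast this with spotting a cracked insulator in a visual photo, where no single pixel value separates "damaged" from "intact"; that is why thermal anomalies are low-hanging fruit while visual defects need large training sets.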
Here at Scopito, we currently have a working algorithm that can identify insulators.
Scopito has entered into a collaboration with IBM, which gives us access to Watson, one of the most powerful computing platforms in the world. That access could greatly reduce the time it takes to prototype algorithms.
(Read more about it here – IBM Watson)
It is safe to guess that development and progress in this field will follow the drone industry: the more inspections are made, the more relevant data there is, and the better algorithms can be trained.
The main challenges the industry is facing seem to be the following:
• Lack of high-quality data.
• The manpower needed to analyze the data manually and improve the training set.
• Elements of interest that don't stand out clearly from the rest of the image are harder for computers to spot than for humans.
• Lighting conditions and camera settings that fluctuate from image to image, posing challenges in recognizing objects even if the algorithm has seen them before.
• The capturing position of the drone, the angles, distractions, and incomplete images can greatly affect accuracy.
• Data that is added to a training set must be carefully selected to ensure accuracy. Providing the algorithm with 'garbage' will result in poor performance.
• Lack of experienced personnel for research in the field.
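One standard mitigation for the lighting-variation challenge above is data augmentation: synthetically varying the brightness of training images so the model sees each object under many exposures. A minimal numpy sketch of the idea (illustrative only, not Scopito's pipeline; the gain range is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_brightness(image: np.ndarray, n_variants: int = 4) -> list:
    """Simulate varying lighting by scaling pixel intensities.
    `image` holds values in [0, 1]; each variant applies a random gain
    from under-exposed (0.6x) to over-exposed (1.4x), then clips."""
    variants = []
    for _ in range(n_variants):
        gain = rng.uniform(0.6, 1.4)
        variants.append((image * gain).clip(0.0, 1.0))
    return variants

image = rng.random((8, 8))  # stand-in for a real inspection photo
augmented = augment_brightness(image)
print(len(augmented))  # 4
```

Each original, manually verified image thus yields several training examples, which also stretches scarce labeled data a little further.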
Why we’ll never reach full automation…
In short, because human advantages are too vital to this process.
Specialists and trained repair crews know exactly which parts of the assets to focus on, and where not to spend resources. The human brain can grasp abstract concepts, making us less prone to noise and giving us a less rigid understanding of objectives.
As a result, we can adapt quickly. Think about the cost of introducing new, upgraded hardware that looks very different from the current hardware: we would have to teach the algorithm all over again what the objective looks like.
See the article Image Analysis – Human Style:
“An algorithm can only do what it’s been told to do: look for dark spots on insulators, perhaps, indicative of equipment in danger of failing. But the utility company may then decide it needs to pinpoint something else on the same batch of images, maybe buildings under the power lines.
An algorithm would then have to be written and tested before this could be done. A human analyst can just be told to look for buildings and mark all images with buildings in them. It’s the work of minutes, literally, to change the search parameters this way.
But, of course, we are working towards stronger, faster automation, and all information gathered in all our projects helps us to learn more and to better train our analysts – and in the future, to form the basis of true AI image analysis.”