Using Image Processing To Count Artillery Craters in Ukraine

A simple approach to get a sense of the damage

Tim Chinenov
6 min read · Jun 27, 2022


As someone with a diverse lineage of Eastern European ancestry, the war in Ukraine hits close to home. I’m continuously monitoring the news coming from the front lines and hearing updates from friends and relatives. I was recently passed a set of images of some of the damage Russian artillery has done in Eastern Ukraine and was asked to use some of my image processing experience to estimate the number of craters in these images.

Admittedly, I haven’t done much image processing work since I left Tesla roughly two years ago. As a result, my approach was fairly experimental and rudimentary. Yet, I hope it demonstrates just how much information can be extracted with a bit of image processing know-how. So let’s dust off those old image processing lecture notes and get to it.

Method

The photo I will be using throughout this example is part of a series of satellite images taken by Maxar Technologies. The company has been publishing photos throughout the war. I have attached a few more of their examples in the GitHub repository linked at the end of this article.

A field of craters in Eastern Ukraine

From the get-go, I could tell this photo has several elements that would make running any image processing algorithm across the whole image tricky. There are tree lines, roads, and some clouds that are interfering with parts of the image. To better isolate the clean regions of the image, I performed some basic spatial transformations.

The bottom left corner of the image.

I rotated the image by roughly -45 degrees and took just about half of the image. This is already a much cleaner picture to experiment with. I wanted to work my way towards a binarized image — an image that only has black and white pixels. To start, we need to simplify the image into a grayscale representation. Instead of taking a traditionally computed grayscale, I first examined if any of the specific color channels would be more useful.
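For readers following along, here is roughly what that step looks like in OpenCV; the filename and the exact crop bounds below are placeholders rather than the precise region I used.

```python
import cv2

# Placeholder filename for the Maxar satellite photo.
img = cv2.imread("maxar_field.png")
h, w = img.shape[:2]

# Rotate roughly -45 degrees (clockwise) about the image center.
M = cv2.getRotationMatrix2D((w / 2, h / 2), -45, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))

# Keep roughly half of the rotated frame to avoid the tree lines,
# roads, and clouds elsewhere in the photo (bounds are illustrative).
crop = rotated[h // 2:, : w // 2]
cv2.imwrite("crop.png", crop)
```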

The blue, green, and red color channels respectively.

I broke the image out into its color channels and examined which would give me the greatest contrast between the craters and the surrounding environment. This was probably the most experimental part of the process. I tried the remaining parts of this algorithm on each color channel and found that the green channel gave the best results.
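In code, the split is a single call; note that OpenCV loads images in BGR order, so green is the middle channel (the filename is again a placeholder).

```python
import cv2

crop = cv2.imread("crop.png")

# OpenCV stores images as BGR, so split() yields blue, green, red.
blue, green, red = cv2.split(crop)

# The green channel showed the strongest crater-to-field contrast,
# so it becomes the working grayscale image for the rest of the pipeline.
cv2.imwrite("green_channel.png", green)
```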

To the green color channel I applied a 7x7 Gaussian blur to reduce the overall noise. Without this filter, further processing is more likely to pick up photographic artifacts and details that we don’t want interfering with our results.

After blurring the image, we reduce the intrinsic grain that comes from the camera quality.
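The blur itself is one GaussianBlur call with a 7x7 kernel; passing a sigma of 0 lets OpenCV derive it from the kernel size.

```python
import cv2

green = cv2.imread("green_channel.png", cv2.IMREAD_GRAYSCALE)

# 7x7 Gaussian kernel; sigmaX=0 lets OpenCV pick sigma from the kernel size.
blurred = cv2.GaussianBlur(green, (7, 7), 0)
cv2.imwrite("blurred_green.png", blurred)
```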

There is one more operation to perform before we have our binary mask: a histogram equalization, or contrast stretch. This increases the contrast of our image, which is useful when we want to strengthen the intensity gradient along pixel edges.

The plot of land after being equalized. We can see the color histogram on the right, where the x-axis represents a pixel intensity value (0 to 255) and the y-axis is the number of total pixels in the image with that intensity value.
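A sketch of this step; equalizeHist is one way to perform the stretch, and calcHist produces a histogram like the one shown on the right of the figure above.

```python
import cv2

blurred = cv2.imread("blurred_green.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization spreads the intensity distribution, pushing
# dark crater centers further from the bright surrounding farmland.
equalized = cv2.equalizeHist(blurred)
cv2.imwrite("equalized.png", equalized)

# Intensity histogram (256 bins over 0-255) used to eyeball the threshold.
hist = cv2.calcHist([equalized], [0], None, [256], [0, 256])
```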

The benefits of the contrast stretch are clear. We can see in the above image that we have a much greater distinction between the epicenter of the craters and the surrounding halos of dirt. Even the craters that are not surrounded by the pale halos (I assume these are a different type of shell) are darker than the surrounding farmland.

Finally, we can binarize our image. The histogram gives a range of good values to threshold our image with. We can see a clear inflection point around the 35 pixel intensity mark. The threshold produces the mask below, where all pixels with intensity 35 or less are converted to a value of 1 (or 255) and all other values to 0.

The final mask we can use to count craters.
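In code, this is an inverted binary threshold at 35, which turns the dark crater pixels white in the mask:

```python
import cv2

equalized = cv2.imread("equalized.png", cv2.IMREAD_GRAYSCALE)

# THRESH_BINARY_INV sends pixels at or below the threshold (35) to 255
# and everything brighter to 0, giving a white-on-black crater mask.
_, mask = cv2.threshold(equalized, 35, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("mask.png", mask)
```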

There are plenty of creative ways to work with this mask now. To count the craters, I used the simple blob detection method provided by OpenCV. The detector was configured with a low inertia ratio and a maximum area parameter. This allows us to capture craters that are slightly elongated while limiting the size of the blobs we consider to be a crater.
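Below is a sketch of the detector configuration, along with the red-circle overlay used in the results; the numeric parameter values are placeholders, not the exact ones behind the counts reported here.

```python
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
crop = cv2.imread("crop.png")

params = cv2.SimpleBlobDetector_Params()

# Detect the white (255) blobs of the mask rather than dark ones.
params.filterByColor = True
params.blobColor = 255

# A low minimum inertia ratio keeps slightly elongated craters...
params.filterByInertia = True
params.minInertiaRatio = 0.05  # placeholder value

# ...while an area cap rejects blobs too large to be a single crater.
params.filterByArea = True
params.minArea = 10    # placeholder value
params.maxArea = 2000  # placeholder value

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(mask)
print(f"Counted {len(keypoints)} craters")

# Outline each detected crater with a red circle on the original crop.
overlay = cv2.drawKeypoints(
    crop, keypoints, None, (0, 0, 255),
    cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS,
)
cv2.imwrite("overlay.png", overlay)
```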

Results

We can see the mask overlaid on the original image below. The blobs that the detection algorithm marked as craters are outlined in a red circle. For the most part, I’m impressed by how well this method did. The process counted a total of 702 craters from the image segment we took.

There are clear weaknesses to this approach. We can see in the segment of the results below that the method fails to detect conjoined craters. These are blasts that happened close to one another and overlap. The method also fails to pick up craters that didn’t expose the dirt beneath. I’m not familiar enough with explosion dynamics to understand why this happens, yet we can see in the bottom-right corner of the image below a white crater that is not covered by the mask at all.

Applying this method to the other side of the image, we see similarly successful results. Another 839 craters are counted.

While I was skeptical I would get decent results running this approach across the whole image, the algorithm did deliver! When run across the entire image with no spatial transformations, we detect 1778 craters. Considering the results of the two previous isolated segments, this number seems to be in the right ballpark.

Improvements & Other Ideas

There are many more improvements and projects that could be derived from this process and image. Unfortunately, due to limited time and manpower, I am unable to pursue them. I will list some of these ideas below and encourage others to run with them. I’d also be interested in hearing other ideas in the comments below.

  1. More Images! — I very much catered this method to a single image. If you ran the code I wrote on a completely different image from Ukraine, there is nothing to suggest it would produce equally clean results. The obvious next step would be to apply this approach to other samples.
  2. Iterative Processing — We can take the developed process, but run it across smaller segments of the image. For example, we can evenly break the image up into tenths and run the algorithm on each for more accurate results. Such a method would better adjust for regional noise.
  3. Spatial Frequency Analysis — With the derived mask, we can take a 2D Fourier transform of the binary splotches and do some visual analysis in the frequency domain. If the bombardments follow any pattern, this analysis could illuminate what the patterns are.
  4. Data Scraping — The coolest next step I see from this process is data scraping. We have located the positions of over 1500 craters in this image with our approach. We can now bound each of these positions with, say, a 50x50 pixel box and create a crater data set (a rough sketch of this follows the list).
  5. Deep Learning & Smarter Solutions — Using the scraped data from the suggestion above, we can attempt to use such data to train a more sophisticated neural network to detect craters. Such an approach would likely need more diverse data from other images, but it could potentially outperform our results in this article.
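For the data-scraping idea, here is a rough sketch of turning the detector’s keypoints into fixed-size crops; the function name and border handling are only illustrative.

```python
import cv2

def crop_craters(image, keypoints, box=50):
    """Cut a box-by-box patch around each detected crater center."""
    half = box // 2
    patches = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        y0, x0 = max(y - half, 0), max(x - half, 0)
        patch = image[y0:y0 + box, x0:x0 + box]
        if patch.shape[:2] == (box, box):  # skip crops clipped at the border
            patches.append(patch)
    return patches

# Usage: patches = crop_craters(crop, keypoints), with the keypoints from
# the blob detector; each patch could then be saved as a dataset sample.
```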

“Слава Україні!” (“Glory to Ukraine!”)

T.Y.C

Code

