Photo by Janusz Maniak on Unsplash
TL;DR
Using high-resolution satellite images from the Amazon rainforest and a good ol' ResNet [1] gives us promising results of > 95% accuracy in detecting deforestation-related land scenes, with interesting results also when applied to other areas of the world. We also show a proof-of-concept of what a real deforestation detection tool could look like, including a dashboard made with Streamlit [2] and data infrastructure on Google Cloud [3]. In this article, you can follow along with these insights and the journey throughout this project.
Why
This time I’ll start with the why
(a probably worn-out reference to one of the great TED talks by Simon Sinek [4] and a wink to all the data science enthusiasts, and my Mom, who read my previous article [5]).
It probably doesn’t come as a surprise to you that we’re facing climate change, a crisis which is already affecting our lives and, unless acted upon in a swift and decisive manner, could get us all into big trouble. Furthermore, this is largely caused by human activity, with the release of greenhouse gases, which means both that we’re at fault for this and that we could potentially fix it. If you’re still skeptical about this, there are many resources that you can look into, but I’d recommend reading Tomorrow’s guide on climate change [6] and Bill Gates’ latest book “How to Avoid a Climate Disaster” [7].
One of the contributors to climate change is deforestation. When cutting trees and plants, we are also cutting natural carbon sinks, i.e. one of the few parts of nature that still seems to want to help us even after all the crap that we put onto her. Besides getting fewer CO2-capturing trees, we are also disrupting the local ecosystem, with potential complications down the line on wildlife, food sources, and water reserves, as well as potentially filling that now emptied area with more polluting factors such as cattle or fossil fuel-powered facilities. And despite all of these disadvantages, we’re facing an increase in deforestation in some parts of the world, including in Brazil, where 2020’s deforestation rate was the highest of the previous decade [8].
When faced with these concerns, we might be tempted to fall into despair
No need to stress out too much there Ned.
or denial.
Not a moment to stand still either.
However, while we must remain aware of the severity of the topic, we should also be stubborn optimists [9], in the sense of believing that we can overcome this adversity and actually do something about it.
We got this!
Fortunately, there are several technological innovations that can help us reduce our carbon footprint. While the first one that comes to our minds might be renewable energy tech, I’d say satellites can be of great use as well. While they don’t reduce emissions on their own, satellites can inform us of our progress and guide our action plan. Getting unbiased, global insights on a frequent basis might be crucial to set priorities and to move forward with policies that have a sound base. In that sense, their importance is being recognized, as we see more and better remote sensing satellites being launched into space by the likes of Planet [10], Satellite Vu [11], and Satellogic [12], as well as more organizations using them for sustainability purposes, such as Climate TRACE [13], TransitionZero [14], and Carbon Mapper [15].
Satellite images on their own are not that practical, as we can’t really just hire some person to run a kind of CCTV surveillance on the entirety of Earth. But we can automate the extraction of insights through a machine learning pipeline. We’re in luck in this part too, as a lot of progress has been made in computer vision over the last years and, with all the satellite data that we’re collecting, it’s also becoming an increasingly enticing machine learning field.
As if all of this wasn’t enough motivation, this started as a group project between Karthik Bhaskar and me for the Full Stack Deep Learning — Spring 2021 online course. So, we were also curious to work on a machine learning project in a somewhat complete manner, from start to finish.
What
The plan
The project proposal, with the initial plan.
To start off the project, we set up a project management workspace in Notion [16], where we could manage our tasks and have a centralized knowledge base with all our notes and relevant links for the project (which comes in handy now that I’m writing an article about it). You can actually check those pages [17], as we made everything public.
Through this platform, we proceeded to discuss our motivations, what we wanted to achieve, and possible project ideas. As you can see from the previous section, the topic of deforestation was very interesting to us and, since we found high-quality datasets for this task (as you’ll see later), it seemed like a feasible project. After some chats and video calls, we defined our MVP (Minimum Viable Project), i.e. the base features of what we wanted to achieve, and some extra objectives.
The MVP
For the MVP, we decided on the following items:
Model trained that has acceptable performance on detecting deforestation. (What would be “acceptable performance” was determined by comparing with baselines)
Dashboard to show the predictions on our dataset.
GitHub package.
Hyperparameter tuning. (i.e. use optimization tools to improve our model)
Data storage. (i.e. set a robust data storage solution for our project’s needs)
We felt like these minimal goals would already fulfill our desire to work on a machine learning project in an end-to-end way, covering many of the important aspects from coding, data engineering, modeling, and deployment.
The bonuses
As for bonuses, as in things that would be nice to have if time and availability allowed for it, we defined these:
Out-of-domain / distribution shift analysis — study our model’s performance on data that’s different from what it was trained on.
Data flywheel — set up a system where new data is regularly collected to improve the model, which can be done with help from users.
Interpretability in the dashboard — allow users to better understand how the model derived each output.
Tests for our package.
Model with improved performance over the MVP one.
While all of these additional milestones were interesting to us, we were particularly drawn to two: the out-of-domain performance analysis, considering the nice validation that it could give of our model and the availability of multiple deforestation-related datasets, and the data flywheel [18]. The latter has been getting a lot of attention lately, including from the likes of Andrej Karpathy and his project vacation [19]. It does seem to be a way forward in AI, a sort of automation of the automation, which can help both at a later stage in machine learning, as it gets harder and harder to further improve models, and at the initial step, where we might not have enough data to train good enough models.
How
Datasets
In order to train a model that can detect deforestation from space, we need labeled data, consisting of satellite images and labels related to the presence or absence of deforestation. While, as we discussed, there’s a growing volume of satellite imagery, labeled data is still relatively scarce. And if we only search for a dataset that looks exactly like what we aim for, such as one where each image has a single, binary deforestation label, we might find it difficult. So first of all, in order to widen our possible solutions, we need to define what deforestation is, at least in terms of visual signals from above. Having said that, we decided to search for datasets that had any labeling of logging or potentially recent human development in natural ecosystems, with activities such as agriculture, mining, and urban expansion.
With the criteria set, our search led us to these three datasets:
Understanding the Amazon from Space [20] — multilabel dataset to track the human footprint in the Amazon rainforest; we’ll mostly refer to this as the Amazon dataset.
WiDS Datathon 2019 [21] — binary dataset for oil palm plantation detection in Borneo; we’ll mostly refer to this as the oil palm dataset.
Towards Detecting Deforestation [22] — binary dataset for detecting coffee plantations in the Amazon rainforest; we’ll mostly refer to this as the coffee dataset.
Our main focus went to the Amazon dataset, but you’ll find out why in the next section.
EDA & Data processing
As usual in data science, before delving into modeling, we started with an Exploratory Data Analysis (EDA), which can guide us in defining the problem and potential solutions, as well as the preprocessing steps that we need before training a model. So let’s analyze here each of the three potential datasets and what we decided to do with each one.
The Amazon dataset was one that immediately stood out for us, given its size and quality. Not counting the test set, for which Kaggle doesn’t share the labels, there are over 40k samples with 17 independent labels, ranging from agriculture to mining, urban infrastructure, natural landscapes, and weather. The size and the contextually relevant labels make this dataset a good candidate for model training.
It contains samples from all over the Amazon rainforest, including Brazil, Peru, Uruguay, Colombia, Venezuela, Guyana, Bolivia, and Ecuador, collected between January 2016 and February 2017. All images come from Planet satellites, with a 3-meter spatial resolution (each pixel represents 3 meters on the ground). This information is relevant not just because it gives us a sense of diversity in the dataset, both spatially and temporally, but also because it might prove useful if we want to compare a model trained on it to out-of-domain data (potentially in a different time and/or place and/or satellite).
The images are given in two formats: the typical RGB images and TIFF files that have an additional near-infrared band. While the TIFF files could be useful, given the additional band, we decided to stick with the RGB images given their simplicity (all images are already preprocessed to have pixel values between 0 and 255) and for flexibility in handling other images, both from other datasets and from users. Either way, another simplifying feature of this dataset is that all images have the same 256 x 256 size.
Approximate distribution of pixel values in the Amazon dataset.
As it’s unlikely for other datasets to have the same labeling approach, and as we’re interested in highlighting deforestation events and risks, we mapped relevant tags as deforestation-related, as you can see below.
All of the Amazon dataset’s tags and the mapping of those related to deforestation.
It’s probably not worth considering habitation a deforestation label. The samples that have this label seem to be mostly cities or villages that have existed for quite some time, so they might not be relevant for detecting current deforestation.
Label distribution in the Amazon dataset. Remember that each label is independent of the others, except for the weather ones.
Most of the deforestation-related labels (except agriculture, road, and cultivation) seem to be rare (i.e. < 1k samples). But combining them all together, we still get a fair amount of positive samples: 15719 samples with deforestation signals, representing around 38.83% of the labeled data, i.e. a somewhat balanced dataset.
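This mapping from the 17 tags to a single binary label can be sketched in a few lines. Note that the tag set below is an illustrative reconstruction of the mapping figure above, not the authoritative list (check the repository for that):

```python
# Tags we treat as deforestation signals (illustrative reconstruction of the
# mapping figure; "habitation" is deliberately excluded, as discussed above).
DEFORESTATION_TAGS = {
    "agriculture", "cultivation", "road", "selective_logging",
    "slash_burn", "conventional_mine", "artisinal_mine",
}

def has_deforestation(tags):
    """True if any of the sample's tags is deforestation-related."""
    return any(t in DEFORESTATION_TAGS for t in tags)

# Toy example with three samples (real labels come from the Kaggle CSV):
samples = [
    ["primary", "clear"],
    ["agriculture", "road", "partly_cloudy"],
    ["water", "haze"],
]
labels = [has_deforestation(tags) for tags in samples]
positive_share = sum(labels) / len(labels)
print(labels, positive_share)  # [False, True, False] 0.3333...
```

Applying this over the full label table is what gives the ~38.83% positive share mentioned above.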
The oil palm dataset was a curious one for us. It has quite a few similarities with the Amazon one, including using the same satellite, the same image format and resolution, as well as a similar purpose of detecting deforestation. But it also had some significant differences, mainly in the sense that it has a narrower focus, with a single label for detecting oil palm plantations, and it’s in a different part of the world, with all images coming from the Southeast Asian island of Borneo, instead of South America’s Amazon rainforest. It also currently has only around 2k labeled images, considerably fewer than the Amazon dataset’s 40k, mainly because of an existing issue with a mismatch between the labels and the image files [23].
Approximate distribution of pixel values in the oil palm dataset. Notice how here the peaks of the plots are shifted to the left, to lower pixel values, when compared to the Amazon dataset.
Despite its problems, the imaging similarity and the differences in data distribution make the oil palm dataset a good option for an out-of-domain test set. Plus, the dataset’s label has_oilpalm is perfectly balanced, at 1089 negative to 1089 positive samples.
While the coffee dataset had the potential to be at least one more out-of-domain dataset like the oil palm one, as it also uses the same satellites and represents a binary plantation detection task, it has far too many issues to be practical for our project. The authors degraded the image resolution and converted the images to grayscale, limitations that are incompatible with the other datasets as is; we’d likely have to sacrifice a lot of model performance to standardize all the images to these restrictions. As such, we discarded the coffee dataset.
So, as a recap, we’re using the Amazon dataset as the main one, which we train our models on, and the oil palm dataset as an additional test set, with out-of-domain data.
Modeling
As suggested throughout the Full Stack Deep Learning course, it’s important to have baselines, so as to know what to expect and what to compare against when developing our models. In our case, it’s easy to find baselines, as we’re training our model on a dataset that was used in a Kaggle competition, i.e. we have several submissions to compare to. Knowing this, it’s important first of all to calculate the same performance metric that was used in the competition, so that we can adequately compare our solution against the leaderboard. The metric used in the Amazon competition is the F2 score, which combines precision and recall while assigning a higher weight to recall. This means that, for our model to score well, one of the main criteria is that it have few false negatives. Considering our interest in detecting deforestation, the low risk of false positives, and the fact that samples with deforestation are rarer than those without, this metric seems to make sense here.
The F2 score, where a higher weight is given to recall, i.e. to having few false negatives.
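In code, the formula above boils down to a one-liner (a small sanity-check implementation, not from our repo):

```python
def fbeta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta > 1 weighs recall more heavily than precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta=2, recall dominates: low recall hurts much more than low precision.
print(round(fbeta(0.5, 1.0), 3))  # 0.833 -> high despite mediocre precision
print(round(fbeta(1.0, 0.5), 3))  # 0.556 -> low despite perfect precision
```

The asymmetry between the two printed values is exactly why a model with few false negatives scores well on F2.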
To actually develop the model, as we were working in a group, we started with two distinct approaches: one of us tried training the model in TensorFlow [24] and the other in FastAI [25]. For the TensorFlow approach, we tried to build a more robust pipeline: a notebook to first test the model by overfitting it on a mini-batch (as suggested in the course, which seems like a good way to confirm that the model works as expected on the given data), model tracking through Weights & Biases, options to use models pre-trained on other remote sensing datasets [26], early stopping and automated learning rate decay based on validation performance, among other features nested inside a neat training script. However, FastAI got us better results, even without all the bells and whistles of the other approach. As such, we focused on the FastAI model.
Sometimes the simpler solution is the best solution.
So what did we do for the FastAI model? It just took these simple steps:
Set FastAI dataloaders with a batch size of 256, images resized to 128 by 128, and a couple of simple data augmentations, consisting of image flips, brightness changes, zooming, and rotations.
Set accuracy and the F2 score as the metrics.
Define a ResNet50 model.
Search for an optimal initial learning rate.
Train the model starting with the best learning rate fixed for 4 epochs, then train for 6 more epochs with a decaying learning rate.
The core part of training our FastAI model. You can see the rest of the code and try it yourself in Colab [28].
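For readers who just want the gist without opening the notebook, the steps above can be sketched roughly like this. This is a sketch, not our exact code: the dataset paths are placeholders, and some fastai API details (like the `lr_find` suggestion attribute) vary between versions:

```python
# Hyperparameters from the steps listed above.
HPARAMS = {"batch_size": 256, "img_size": 128,
           "frozen_epochs": 4, "unfrozen_epochs": 6}

def train(df, img_path):
    """Train a multilabel ResNet50 on a dataframe of (image, space-separated tags)."""
    # Import kept inside the function so the recipe stays readable without fastai.
    from fastai.vision.all import (
        ImageDataLoaders, Resize, aug_transforms,
        cnn_learner, resnet50, accuracy_multi, FBetaMulti,
    )

    dls = ImageDataLoaders.from_df(
        df, path=img_path, label_delim=" ",          # multilabel tags split on spaces
        bs=HPARAMS["batch_size"],
        item_tfms=Resize(HPARAMS["img_size"]),
        batch_tfms=aug_transforms(flip_vert=True),   # flips, rotation, zoom, brightness
    )
    learn = cnn_learner(
        dls, resnet50,
        metrics=[accuracy_multi, FBetaMulti(beta=2, average="samples")],
    )
    lr = learn.lr_find().valley                      # search for an initial learning rate
    # 4 epochs with the body frozen, then 6 more with a decaying schedule.
    learn.fine_tune(HPARAMS["unfrozen_epochs"],
                    base_lr=lr, freeze_epochs=HPARAMS["frozen_epochs"])
    return learn
```

`fine_tune` handles the freeze/unfreeze dance and the learning rate decay for us, which is a big part of why the FastAI version stayed so short.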
As you can see, most of the modeling steps are taken care of by FastAI, which makes our code short and the development process fast (I wonder where they got their name from). This does have its shortcomings, as it locks away some flexibility and customization that other, less abstract frameworks provide. But it will do for now.
Notice also how we chose the ResNet50 model. While this architecture is from 2015, which can already be seen as old in the modern fast-paced research world, its efficiency and wide community support have made it very practical to use, at least as an initial iteration in computer vision tasks. More recent models like the ViT [27] (and whatever model is state of the art today) show better performance on large datasets, but their very large size can make them unusable for smaller datasets and/or personal hardware setups, besides the model weights and/or code sometimes not being publicly available.
As you can see in our notebook [28], our model achieved an F2 score of 92.7% and an accuracy of 95.6% on the validation set. Looking good on their own, these results also make us happy when we see that the top of the leaderboard on Kaggle got 93.3% F2 score. It’s still higher than ours and on a slightly different set, as it’s calculated on Kaggle’s test set (we didn’t do a submission). But still, this performance seemed good enough for us to move on from modeling.
I’d also like to point out that, when running inference on the oil palm dataset and calculating the scores as a binary deforestation task (using the deforestation mapping that we discussed before), we got an accuracy of 66.8% and an F2 score of 86.7%. We did expect a worse performance than on the dataset where the model was trained, but this still gives us some interesting insights. First of all, the model must have very few false negatives to have such a high F2 score alongside a lower accuracy, which means that it probably can still point out deforestation events. On the other hand, these results also tell us that the model has a lot of false positives. This can be caused by several factors, including the regional differences between the datasets and perhaps slightly different image preprocessing. We also already saw how the oil palm dataset’s images tend to have lower pixel values than the Amazon dataset’s, which can potentially trick our model into labeling samples as positive. We could try several options to improve our model’s performance on this dataset, such as standardizing the pixel distributions on both datasets or including some samples from this dataset in training, but we’ll leave it as it is for now.
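To see how a high F2 can coexist with mediocre accuracy, here's a toy confusion matrix with few false negatives but many false positives (the counts are made up for illustration, not our actual oil palm results):

```python
# Made-up confusion-matrix counts: few false negatives, many false positives.
tp, fp, fn, tn = 950, 600, 50, 400

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f2 = 5 * precision * recall / (4 * precision + recall)

print(f"accuracy={accuracy:.3f}, F2={f2:.3f}")  # accuracy=0.675, F2=0.856
```

Because F2 is dominated by recall (0.95 here), the flood of false positives tanks the accuracy while barely denting the F2 score, which matches the pattern we observed.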
Dashboard
We wanted to have a dashboard by the end of the project, so that we could have an interactive demo that combined all the pieces and served as a simple deployment step in our end-to-end project. We decided to use Streamlit for the dashboard development, given its ease of use and our interest in learning it. But before diving into coding, we needed to look at the full picture and think of what we’d want our dashboard to be like. For its core functionalities, we wanted to include:
User input — the option for the users to run our model on their images.
Aggregate performance — show the global results of our model on the Amazon dataset.
Sample exploration — show the model results over each sample.
Out-of-domain performance — show the model results on the oil palms dataset.
(bonus) Data flywheel — collect user data that can help us improve our model.
As we’re developing a dashboard, which we want to be intuitive and visually appealing, we should also think beforehand about the UI itself. So, we thought of two pages and sketched their drafts:
Playground — this is the initial page made specifically just for the users to try the model on their images; we can also use this page to store the users’ uploaded images and their feedback.
Overview — a page for reviewing our model’s performance and the data’s characteristics over our datasets.
The draft for the playground (left) and the overview (right) pages of the dashboard. These drafts featured the AUC metric, which was later replaced by the F2 score as we prioritized that metric.
Notice that we follow the typical design guideline of Streamlit apps: the page content has all the results and plots, while the user inputs are set in the sidebar.
These drafts differ from the final dashboard in a couple of details, but I’d like to highlight that, contrary to what the drafts suggest, we use a single model, trained to predict the 17 categories from the Amazon dataset. So here is the final dashboard [29]:
The playground page on the final dashboard.
The overview page on the final dashboard.
These look similar to the drafts, right?
We also have some additional features that I’d like to point out:
Several info cards to guide the user through the dashboard and make it more intuitive.
The outputs are given in green color if they aren’t related to deforestation, and in red otherwise.
A simple data flywheel is implemented, as we can save the images that users give us, as well as the model’s output on them and the users’ feedback on it. We also give users a button to automatically delete all the data that we stored from that interaction.
Caching and filtering to a subset of data are used to improve the app’s performance.
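As an illustration of the filtering trick, the app only ever loads a small, deterministic subset of samples per rerun. The helper below is hypothetical, not our exact app code; in the app, a function like this would additionally be wrapped with Streamlit's caching decorator (`st.cache` at the time) so reruns reuse the result instead of repeating the work:

```python
import random

def sample_subset(image_ids, n=100, seed=42):
    """Deterministic subset of samples, to keep the dashboard responsive.

    In the Streamlit app this would be decorated with the caching decorator,
    so the subset is computed once and reused across script reruns.
    """
    rng = random.Random(seed)  # fixed seed -> same subset on every rerun
    if len(image_ids) <= n:
        return list(image_ids)
    return rng.sample(list(image_ids), n)

subset = sample_subset([f"img_{i}" for i in range(10_000)], n=100)
print(len(subset))  # 100
```

Determinism matters here: since Streamlit reruns the whole script on every interaction, a random subset that changed per rerun would make the plots jump around.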
The dashboard is fully deployed and ready to be explored by everyone, as we relied on the Streamlit Sharing service [30] to easily publish it.
Data storage
For the deployment of the dashboard, we wanted it to be as practical as possible for anyone to try. Being practical means that we should avoid forcing everyone to download the full datasets to their computers just to try it. And as we went in the Streamlit Sharing direction, being practical also meant that we wouldn’t just put all the datasets in our GitHub repository. The most practical solution is then to have our data stored in the cloud, with the dashboard accessing only what it needs, when it needs it, through connections to our databases. And as we wanted to collect data from users, we would need a centralized database anyway, so we might as well include everything in our data infrastructure.
Considering our project, our data needs were:
Store the label tables, i.e. the dataframes that associate labels to each image.
Store all the images from our datasets.
Store the users’ data, which included their images, the model’s output on them, the users’ feedback, and additional data like the timestamp and unique identifiers.
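A record for one user interaction can be assembled with the standard library alone. The field names below are illustrative, not our exact schema, but they show the idea of a unique identifier linking the tabular row to the separately stored image:

```python
import json
import uuid
from datetime import datetime, timezone

def build_user_record(filename, model_output, feedback=None):
    """Assemble one user interaction as a flat record, ready to insert as a table row.

    The image itself would be stored separately (e.g. in a bucket)
    under the same image_id.
    """
    return {
        "image_id": str(uuid.uuid4()),             # unique id linking row and image
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "filename": filename,
        "model_output": json.dumps(model_output),  # serialized tag probabilities
        "feedback": feedback,                      # e.g. "correct", "incorrect" or None
    }

record = build_user_record("my_photo.png", {"agriculture": 0.91}, "correct")
print(sorted(record))  # ['feedback', 'filename', 'image_id', 'model_output', 'timestamp']
```

Serializing the model output as JSON keeps the table schema flat, which plays nicely with columnar stores.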
We found that Google Cloud fulfilled all these requirements, as we could organize our data in the following way:
Store the label tables in BigQuery tables.
Amazon dataset’s labels stored in a BigQuery table.
Store the images in Google Cloud Storage buckets.
Amazon dataset’s images stored in Google Cloud Storage buckets.
Store the users’ tabular data in BigQuery and their images in buckets.
Users’ data stored in a BigQuery table. The images are stored in a separate bucket, but they’re linked to this table through the image_id column.
Besides having the tools needed to store the data, Google Cloud also has a nice integration in Python, which makes it easy to read and write data.
Examples of loading data and uploading data to Google Cloud from Python.
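For example, uploading a user's image to a bucket takes only a few lines with the google-cloud-storage client. The bucket name and path scheme below are placeholders, not our actual project's:

```python
def user_blob_path(image_id):
    """Where a user's image lives inside the bucket (illustrative scheme)."""
    return f"user_uploads/{image_id}.png"

def upload_user_image(local_path, image_id, bucket_name="my-project-user-data"):
    """Upload a local image file to a Google Cloud Storage bucket."""
    # Import kept local so the path helper above works without the GCP client installed.
    from google.cloud import storage

    client = storage.Client()  # credentials picked up from the environment
    blob = client.bucket(bucket_name).blob(user_blob_path(image_id))
    blob.upload_from_filename(local_path)
```

The client reads credentials from the environment (e.g. a service account key), which is exactly what Streamlit's secrets management, mentioned below, lets us provide without committing them to the repository.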
While this data infrastructure is managed by us, it can be updated on its own as users upload and delete their data, as shown in the previous section on the dashboard. As such, we needed to make sure that the dashboard had the credentials to connect to our databases without revealing them publicly. Fortunately, through Streamlit’s secrets management [31], that’s easy to do.
Final result
The video from the final project submission.
After going through all of what we’ve done, let’s go back to what we set out to do and see what we managed to achieve.
From the MVP:
Model trained that has acceptable performance on detecting deforestation
Trained a model with 95.6% accuracy and 92.7% F2 score on a deforestation-related classification task, with performance comparable to the top of the leaderboard in the Kaggle challenge.
Dashboard to show the predictions on our dataset
Developed a dashboard with Streamlit that not only shows an overview of the data and model results, but also allows for user inputs.
GitHub package
Shared all the code and the model in our GitHub repository [32].
Hyperparameter tuning
Performed a brief tuning of the learning rate.
Data storage
Set up a Google Cloud workspace to handle all our data needs.
So all the basics were fulfilled
What about the bonuses?
Out-of-domain / distribution shift analysis
Analyzed our model’s performance on the oil palm dataset.
Data flywheel
Set a database and the UI to allow the collection of inputs from users, which could be used to further improve the model.
Interpretability in the dashboard
Couldn’t get SHAP [33], a popular interpretability library, to run on our FastAI model.
Tests for our package
Didn’t integrate any custom tests, neither for the code, nor the model, nor the data.
Model with improved performance over the MVP one
Didn’t iterate more on the model; however, it did already have an impressive performance on the Amazon dataset.
We also accomplished two of our bonuses!
Yay for moderate success!
But before heading for the celebratory party, we should have a bit more introspection on what happened and what might be the next steps.
Lessons learned
Awesome wins
ResNet’s great results even today — it never ceases to amaze me how easy it can be to get great results with a simple ResNet model; at least for not very complex tasks, success seems to depend more and more on data quality rather than model architecture.
Streamlit’s simplicity and usefulness — as a first contact with this tool, it was incredibly easy to learn and made for a fast process from idealizing a dashboard to creating and even deploying it.
Google Cloud‘s completeness — at least in terms of data storage, Google Cloud really seems to have solutions for most use cases, and with good Python integration.
Insightful fails
FastAI, its limitations, and the dangers of delving into uncharted territory — this was another framework that I personally hadn’t used before, although Karthik, my teammate, had; contrary to most of the other tools, this one gave us some difficulties, particularly with its low flexibility, confusing documentation and comparably low community support (e.g. couldn’t find a way to make it work with SHAP).
The overly ambitious adventures with TensorFlow — we still spent quite a while trying to develop a robust model training pipeline in TensorFlow, only to see FastAI getting us much better results, much faster; we might need to plan modeling priorities and timelines better next time.
Streamlit’s oversimplified script runs — as with FastAI, Streamlit’s simplicity can be a disadvantage; as Streamlit reruns the entire script every time something changes in the dashboard, such as user input, it can be hard to make it as speedy as desired and to prevent it from crashing; for complex apps, we need to pay attention to caching and performance tricks.
Unmaintained datasets — we faced some issues with apparently incorrect or outdated files in the oil palm dataset and never managed to get a response from the authors.
Sickness, broken laptops and other tales from hell — life is unpredictable and Murphy’s law is evil; although we managed to achieve most of what we wanted to do and still believe in our project planning mindset, we should consider discussing in more detail the worst case scenarios, prepare plan Bs and perhaps readjust our estimates of time taken to complete certain tasks.
Future work
There are several aspects of this project that could be improved upon. In our opinion, the main ones would be these:
Include interpretability analysis with SHAP — adding interpretability can help with understanding the model’s decisions, which in turn can help us debug it, gain trust in it and make for a more pleasant experience.
Improve our model, potentially with more data and hyperparameter tuning — we still didn’t reach the top performance of the leaderboard, and the metrics on the out-of-domain dataset weren’t very good.
Implement proper model tracking, such as through Weights & Biases [34] — as new models are trained, logging experiments and comparing them becomes more important.
Add code, model and data testing — this feels crucial in a longer-term, more mature project, to be sure that everything works as expected along further development and usage; the Full Stack Deep Learning course has a great lecture on this [35].
Make the dashboard faster and more performant — the dashboard still feels slow sometimes and we had to make some sacrifices to be sure that it’s practical and doesn’t crash; we could look more into this.
References
[1] Kaiming He et al., Deep Residual Learning for Image Recognition (2015)
[2] Streamlit
[3] Google Cloud
[4] Simon Sinek, How Great Leaders Inspire Action (2009), TED Talks
[5] André Ferreira, Interpreting Recurrent Neural Networks on Multivariate Time Series (2019), Towards Data Science
[6] Olivier Corradi, Climate Change — A Pragmatic Guide (2020), Tomorrow Blog
[7] Bill Gates, How to Avoid a Climate Disaster (2021), Alfred A. Knopf
[8] Celso H. L. Silva Junior et al., The Brazilian Amazon Deforestation Rate in 2020 is The Greatest of The Decade (2020), Nature
[9] Christiana Figueres, The Case for Stubborn Optimism on Climate (2020), TED Talks
[10] Planet
[11] Satellite Vu
[12] Satellogic
[13] Climate TRACE
[14] TransitionZero
[15] Carbon Mapper
[16] Notion
[18] Josh Tobin et al., Lecture 5: ML Projects (2021), Full Stack Deep Learning — Spring 2021
[19] Andrej Karpathy, PyTorch at Tesla (2019), PyTorch DevCon 2019
[20] Planet: Understanding the Amazon from Space (2017), Kaggle
[21] WiDS Datathon 2019 (2019), Kaggle
[22] Towards Detecting Deforestation (2020), Kaggle
[23] CP_Padubidri, Dataset — Mismatch (2021), Kaggle
[24] TensorFlow
[25] FastAI
[27] Alexey Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2021), ICLR
[28] Karthik Bhaskar, FSDL_Final_Model Notebook (2021), Colab
[29] André Ferreira, Project dashboard (2021), Streamlit
[30] Adrien Treuille, Introducing Streamlit Sharing (2020), Streamlit Blog
[31] James Thompson, Add Secrets to Your Streamlit Apps (2021), Streamlit Blog
[32] André Ferreira and Karthik Bhaskar, Project Repository (2021), GitHub
[33] Scott Lundberg et al., A Unified Approach to Interpreting Model Predictions (2017), NIPS 2017
[34] Weights & Biases
[35] Josh Tobin et al., Lecture 10: Testing & Explainability (2021), Full Stack Deep Learning — Spring 2021