First Steps: AI for Animal Advocacy with Fastai
Categories: blog, fastai, deeplearning
Author: Evan Lesmez
Published: May 12, 2023
Code
```python
from PIL import Image

# Load and display the DALL·E-generated image with a caption
smol_forest_guardian = Image.open("./DALL·E-digital_art_cute_solarpunk_forest_guardian_robot.png")
display(
    smol_forest_guardian.resize((400, 400)),
    "Solar punk forest guardian source: DALL·E 2",
)
```
Solar punk forest guardian (source: DALL·E 2)
This charming little forest robot was created using OpenAI's DALL·E 2 model, based on my prompt: "Digital art cute solarpunk forest guardian robot".
This image represents an idea I've been interested in for a long time. I'm not certain where it all started, but I think it goes back to my childhood. That's when my aunt introduced me to my first Miyazaki movies: "My Neighbor Totoro", "Spirited Away", and "Castle in the Sky".
Code
```python
# Load and downscale the Castle in the Sky guardian still to one-third size
big_guardian = Image.open("./castle_in_sky_guardian.jpg")
ratio = 0.33
new_dimens = (round(big_guardian.width * ratio), round(big_guardian.height * ratio))
display(
    big_guardian.resize(new_dimens),
    "Castle in the Sky Guardian source: https://characterdesignreferences.com/art-of-animation-9/art-of-castle-in-the-sky",
)
```
Castle in the Sky guardian (source: https://characterdesignreferences.com/art-of-animation-9/art-of-castle-in-the-sky)
Most Hayao Miyazaki fans, I believe, can relate to the sense of awe they feel when watching his films. He has a unique ability to instill a profound respect for nature, capturing its simple beauty and serving as a daily reminder of how much we often take it for granted. His films advocate for the protection of nature from human exploitation and emphasize the importance of reconnecting with the world around us. The Laputian robot from "Castle in the Sky", depicted above, stands as a compelling example of this message.
Spoiler alert: skip this paragraph if you haven't seen the film. This robot, initially introduced as a deadly weapon designed to protect the royal family of Laputa (the castle in the sky), demonstrates overwhelming destructive power at various points throughout the film. However, a contrasting image is portrayed within the castle itself, where we see the same model of robot tending gardens, befriending animals, and existing harmoniously with nature. This stark dichotomy serves as a potent caution for the evolution of technologies such as Artificial Intelligence (AI). The choice is ours: will we opt for peace or plunge into chaos?
A few years ago, perhaps inspired by Miyazaki's works, I realized my mission: to develop technology that champions the rights of non-human animals and safeguards our shared ecosystems. I envision a future where AI not only respects nature more deeply than humans currently do, but also unravels its secrets that remain undiscovered.
To take steps towards this goal, I am embarking on a journey to learn about deep learning, one of the most promising fields within AI. This blog will serve as a record of my progress, where I'll document my practice and share related ideas, lessons, and questions that arise along the way.
Fastai is a vibrant community of deep learning enthusiasts, dedicated to making AI accessible to all. I'm currently going through their "Practical Deep Learning for Coders" course, which has been fantastic thus far!
I'd highly recommend this course to anyone with even a hint of programming experience who's curious about AI. This is particularly true if you're in an industry where AI development is still in its infancy: there could be a significant opportunity waiting.
Surprising Discoveries (so far)
I was astounded by the speed at which I could train and deploy my first model, all within a few weeks of learning.
Transfer learning is a technique that takes a model pre-trained on a large dataset, whose weights already encode useful learned patterns, and fine-tunes it with your specific data.
This strategy allows you to quickly implement a functioning model, without the need to start from scratch each time. As an example, I trained a simple greenhouse/hydroponic plant health classifier using a pre-trained image classifier model based on the ResNet18 architecture. This was a problem a previous company I worked at was trying to solve, so I thought it would be a fun challenge to undertake myself.
```python
from fastai.vision.all import *
from fastai.vision.widgets import *

# ... create a labeled image DataBlock and visualize a batch
hydro_dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=Resize(128),
)
dls = hydro_dblock.dataloaders(path)
dls.valid.show_batch(max_n=8, nrows=2, figsize=(8, 5))
```
```python
# ... use a pretrained learner and fine-tune it for 4 epochs
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
```
Overcoming Initial Fears in Deep Learning
Before diving into the world of deep learning, I was somewhat daunted by how complex I feared training and deploying a model would be. I'm neither a math whiz nor a master coder, but I found many of the initial concepts far more intuitive than I'd anticipated.
For instance, the practice of keeping a training set of data separate from a validation set (and a test set) seemed quite logical. The training set provides the model with a foundational understanding of correct answers, like labeled images. The validation set then serves as a quiz for your model, checking its comprehension of the patterns it has learned. In the context of an image classifier, the model guesses which label best matches each image in the validation set, and we measure how confident it was in each correct or incorrect guess. This process facilitates the model's improvement with each "epoch", or training cycle. Additionally, a completely separate test set, kept hidden from the model, can be used by humans to assess the model's performance after training is completed.
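As a minimal sketch of that idea (not code from the course; the `image_files` list here is hypothetical), a simple 80/10/10 three-way split might look like this in plain Python. Fastai's `RandomSplitter`, used earlier in this post, handles the train/validation portion for you:

```python
import random

# Hypothetical list of 1,000 labeled image filenames (for illustration only)
image_files = [f"plant_{i:04d}.jpg" for i in range(1000)]

random.seed(42)
random.shuffle(image_files)

# 80% training, 10% validation, 10% held-out test
n = len(image_files)
train_set = image_files[: int(n * 0.8)]
valid_set = image_files[int(n * 0.8) : int(n * 0.9)]
test_set = image_files[int(n * 0.9) :]  # never shown to the model during training

print(len(train_set), len(valid_set), len(test_set))  # 800 100 100
```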
Separating a robust validation set (and test set) helps to prevent overfitting the model to images present only in the training set. Overfitting can render models unreliable for new images encountered outside the "lab" setting.
For instance, if you're building a cat breed classifier and include numerous images of the same orange cat perched on its cat tower in both the training and validation sets, the model might overfit for that particular scenario.
Another concept I found intuitive and valuable is the confusion matrix. The confusion matrix helps us visualize which labels the model was "confused" by and predicted incorrectly during training. For example, as shown below, the model predicted that a few plants were healthy when they were actually wilted, and vice versa.
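The notebook cell that produced my confusion matrix isn't reproduced here, but with fastai it typically looks like the sketch below, assuming the `learn` object from the earlier training cells:

```python
# Build an interpretation object from the fine-tuned learner
interp = ClassificationInterpretation.from_learner(learn)

# Rows show actual labels, columns show predicted labels
interp.plot_confusion_matrix()
```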
We can also plot the top mistakes to visualize the images where the model made incorrect predictions and evaluate the model's confidence in its decisions. Being confidently wrong is problematic, but so is being correct with low confidence. Both scenarios suggest areas where the model can improve.
In the first case, the model may have "learned" incorrect patterns from the training data, leading to high confidence in wrong predictions. In the second case, the model's lack of confidence, even when correct, could suggest that it's struggling to find clear patterns in the data. These are valuable insights that can guide us in improving the model's performance.
```python
# Show the 5 highest-loss predictions with their predicted and actual labels
interp.plot_top_losses(5)
```
Opportunities for Deep Learning in the Animal and Vegan Advocacy Movement
The Animal and Vegan Advocacy (AVA) movement has a multitude of opportunities to leverage deep learning.
One of the most intriguing projects I've come across in this field is the Earth Species Project. Their goal is to decode non-human communication using Natural Language Processing. The potential to understand the "secret languages" animals use could undoubtedly foster more compassion.
Obstacles Faced by the Movement
Non-profit organizations, particularly those advocating for animal rights, often face resource constraints that aren't an issue for for-profit industries. Even within the landscape of animal non-profits, farmed animal activism receives only a fraction of the donations that shelters do.
Moreover, non-profits frequently lag behind in technology adoption, making it challenging not only to attract talent like Machine Learning engineers, but also to pursue deep learning-enabled projects that have the potential to make a significant impact.
Large animal agriculture enterprises, armed with extensive resources, are using AI to enhance their efficiency, often without considering animal welfare or ecosystem health. Historically, technology has been used to exploit our environment, damaging natural habitats and harming wildlife. If left unchecked, AI could further this trend.
We need to empower compassionate individuals and policymakers to better understand AI. This will ensure its use strikes a healthier balance between technological advancement and nature, rather than exacerbating existing problems.
Thank you for reading, and stay tuned for more posts in the future!
This blog was built with Quarto and Jupyter, allowing me to embed fun, interactive, code-generated blocks like the one below.
Try hovering over it.
Code
```python
import plotly.graph_objects as go
import plotly.offline as pyo
import numpy as np

# Set notebook mode to work offline
pyo.init_notebook_mode()

# Helix equation
t = np.linspace(0, 20, 100)
x, y, z = np.cos(t), np.sin(t), t

fig = go.Figure(
    data=[
        go.Scatter3d(
            x=x,
            y=y,
            z=z,
            mode="markers",
            marker=dict(size=12, color=z, colorscale="spectral", opacity=0.8),
        )
    ]
)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0), width=640, height=640)
fig.show()
```