Vertebrate Detection¶
Vertebrate : An animal of a large group distinguished by the possession of a backbone or spinal column, including mammals, birds, reptiles, amphibians, and fishes.
Classes available (note: despite the title, several of these, such as 'Butterfly', 'Crab', 'Scorpion', 'Spider', and 'Worm', are invertebrates): 'Bear', 'Brown bear', 'Bull', 'Butterfly', 'Camel', 'Canary', 'Caterpillar', 'Cattle', 'Centipede', 'Cheetah', 'Chicken', 'Crab', 'Crocodile', 'Deer', 'Duck', 'Eagle', 'Elephant', 'Fish', 'Fox', 'Frog', 'Giraffe', 'Goat', 'Goldfish', 'Goose', 'Hamster', 'Harbor seal', 'Hedgehog', 'Hippopotamus', 'Horse', 'Jaguar', 'Jellyfish', 'Kangaroo', 'Koala', 'Ladybug', 'Leopard', 'Lion', 'Lizard', 'Lynx', 'Magpie', 'Monkey', 'Moths and butterflies', 'Mouse', 'Mule', 'Ostrich', 'Otter', 'Owl', 'Panda', 'Parrot', 'Penguin', 'Pig', 'Polar bear', 'Rabbit', 'Raccoon', 'Raven', 'Red panda', 'Rhinoceros', 'Scorpion', 'Sea lion', 'Sea turtle', 'Seahorse', 'Shark', 'Sheep', 'Shrimp', 'Snail', 'Snake', 'Sparrow', 'Spider', 'Squid', 'Squirrel', 'Starfish', 'Swan', 'Tick', 'Tiger', 'Tortoise', 'Turkey', 'Turtle', 'Whale', 'Woodpecker', 'Worm', 'Zebra'
!nvidia-smi -L
GPU 0: Tesla T4 (UUID: GPU-9748f974-5561-2fe2-9237-66b6f5079e96)
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download -d antoreepjana/animals-detection-images-dataset
Downloading animals-detection-images-dataset.zip to /content
100% 8.92G/8.92G [03:28<00:00, 45.8MB/s]
# Get helper functions file
! wget https://raw.githubusercontent.com/Hrushi11/Dogs_VS_Cats/main/helper_functions.py
--2021-07-18 09:00:21-- https://raw.githubusercontent.com/Hrushi11/Dogs_VS_Cats/main/helper_functions.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10139 (9.9K) [text/plain]
Saving to: ‘helper_functions.py’

helper_functions.py 100%[===================>] 9.90K --.-KB/s in 0s

2021-07-18 09:00:21 (64.5 MB/s) - ‘helper_functions.py’ saved [10139/10139]
Importing dependencies¶
# Importing dependencies
import os
import random
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import classification_report
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from helper_functions import create_tensorboard_callback, plot_loss_curves, unzip_data, compare_historys, walk_through_dir, make_confusion_matrix
Getting Our Data Ready¶
# Unzipping the data
unzip_data("/content/animals-detection-images-dataset.zip")
! rm /content/animals-detection-images-dataset.zip
walk_through_dir("/content/train")
There are 80 directories and 0 images in '/content/train'. There are 1 directories and 370 images in '/content/train/Snail'. There are 0 directories and 370 images in '/content/train/Snail/Label'. There are 1 directories and 289 images in '/content/train/Goose'. There are 0 directories and 289 images in '/content/train/Goose/Label'. There are 1 directories and 103 images in '/content/train/Raccoon'. There are 0 directories and 103 images in '/content/train/Raccoon/Label'. There are 1 directories and 313 images in '/content/train/Tiger'. There are 0 directories and 313 images in '/content/train/Tiger/Label'. There are 1 directories and 239 images in '/content/train/Sea turtle'. There are 0 directories and 239 images in '/content/train/Sea turtle/Label'. There are 1 directories and 128 images in '/content/train/Worm'. There are 0 directories and 128 images in '/content/train/Worm/Label'. There are 1 directories and 181 images in '/content/train/Zebra'. There are 0 directories and 181 images in '/content/train/Zebra/Label'. There are 1 directories and 248 images in '/content/train/Starfish'. There are 0 directories and 248 images in '/content/train/Starfish/Label'. There are 1 directories and 113 images in '/content/train/Canary'. There are 0 directories and 113 images in '/content/train/Canary/Label'. There are 1 directories and 1200 images in '/content/train/Lizard'. There are 0 directories and 1200 images in '/content/train/Lizard/Label'. There are 1 directories and 367 images in '/content/train/Squirrel'. There are 0 directories and 367 images in '/content/train/Squirrel/Label'. There are 1 directories and 188 images in '/content/train/Sea lion'. There are 0 directories and 188 images in '/content/train/Sea lion/Label'. There are 1 directories and 132 images in '/content/train/Cheetah'. There are 0 directories and 132 images in '/content/train/Cheetah/Label'. There are 1 directories and 47 images in '/content/train/Bull'. 
There are 0 directories and 47 images in '/content/train/Bull/Label'. There are 1 directories and 233 images in '/content/train/Swan'. There are 0 directories and 233 images in '/content/train/Swan/Label'. There are 1 directories and 835 images in '/content/train/Fish'. There are 0 directories and 835 images in '/content/train/Fish/Label'. There are 1 directories and 171 images in '/content/train/Woodpecker'. There are 0 directories and 171 images in '/content/train/Woodpecker/Label'. There are 1 directories and 45 images in '/content/train/Red panda'. There are 0 directories and 45 images in '/content/train/Red panda/Label'. There are 1 directories and 108 images in '/content/train/Brown bear'. There are 0 directories and 108 images in '/content/train/Brown bear/Label'. There are 1 directories and 298 images in '/content/train/Giraffe'. There are 0 directories and 298 images in '/content/train/Giraffe/Label'. There are 1 directories and 391 images in '/content/train/Tortoise'. There are 0 directories and 391 images in '/content/train/Tortoise/Label'. There are 1 directories and 91 images in '/content/train/Panda'. There are 0 directories and 91 images in '/content/train/Panda/Label'. There are 1 directories and 155 images in '/content/train/Elephant'. There are 0 directories and 155 images in '/content/train/Elephant/Label'. There are 1 directories and 100 images in '/content/train/Jaguar'. There are 0 directories and 100 images in '/content/train/Jaguar/Label'. There are 1 directories and 194 images in '/content/train/Centipede'. There are 0 directories and 194 images in '/content/train/Centipede/Label'. There are 1 directories and 98 images in '/content/train/Kangaroo'. There are 0 directories and 98 images in '/content/train/Kangaroo/Label'. There are 1 directories and 67 images in '/content/train/Camel'. There are 0 directories and 67 images in '/content/train/Camel/Label'. There are 1 directories and 30 images in '/content/train/Shrimp'. 
There are 0 directories and 30 images in '/content/train/Shrimp/Label'. There are 1 directories and 76 images in '/content/train/Hippopotamus'. There are 0 directories and 76 images in '/content/train/Hippopotamus/Label'. There are 1 directories and 74 images in '/content/train/Tick'. There are 0 directories and 74 images in '/content/train/Tick/Label'. There are 1 directories and 309 images in '/content/train/Crab'. There are 0 directories and 309 images in '/content/train/Crab/Label'. There are 1 directories and 87 images in '/content/train/Bear'. There are 0 directories and 87 images in '/content/train/Bear/Label'. There are 1 directories and 64 images in '/content/train/Hamster'. There are 0 directories and 64 images in '/content/train/Hamster/Label'. There are 1 directories and 392 images in '/content/train/Ladybug'. There are 0 directories and 392 images in '/content/train/Ladybug/Label'. There are 1 directories and 542 images in '/content/train/Duck'. There are 0 directories and 542 images in '/content/train/Duck/Label'. There are 1 directories and 719 images in '/content/train/Eagle'. There are 0 directories and 719 images in '/content/train/Eagle/Label'. There are 1 directories and 475 images in '/content/train/Sparrow'. There are 0 directories and 475 images in '/content/train/Sparrow/Label'. There are 1 directories and 75 images in '/content/train/Otter'. There are 0 directories and 75 images in '/content/train/Otter/Label'. There are 1 directories and 136 images in '/content/train/Ostrich'. There are 0 directories and 136 images in '/content/train/Ostrich/Label'. There are 1 directories and 494 images in '/content/train/Caterpillar'. There are 0 directories and 494 images in '/content/train/Caterpillar/Label'. There are 1 directories and 240 images in '/content/train/Harbor seal'. There are 0 directories and 240 images in '/content/train/Harbor seal/Label'. There are 1 directories and 62 images in '/content/train/Raven'. 
There are 0 directories and 62 images in '/content/train/Raven/Label'. There are 1 directories and 61 images in '/content/train/Magpie'. There are 0 directories and 61 images in '/content/train/Magpie/Label'. There are 1 directories and 406 images in '/content/train/Owl'. There are 0 directories and 406 images in '/content/train/Owl/Label'. There are 1 directories and 400 images in '/content/train/Horse'. There are 0 directories and 400 images in '/content/train/Horse/Label'. There are 1 directories and 86 images in '/content/train/Turkey'. There are 0 directories and 86 images in '/content/train/Turkey/Label'. There are 1 directories and 421 images in '/content/train/Parrot'. There are 0 directories and 421 images in '/content/train/Parrot/Label'. There are 1 directories and 208 images in '/content/train/Lion'. There are 0 directories and 208 images in '/content/train/Lion/Label'. There are 1 directories and 588 images in '/content/train/Frog'. There are 0 directories and 588 images in '/content/train/Frog/Label'. There are 1 directories and 80 images in '/content/train/Scorpion'. There are 0 directories and 80 images in '/content/train/Scorpion/Label'. There are 1 directories and 303 images in '/content/train/Shark'. There are 0 directories and 303 images in '/content/train/Shark/Label'. There are 1 directories and 56 images in '/content/train/Koala'. There are 0 directories and 56 images in '/content/train/Koala/Label'. There are 1 directories and 770 images in '/content/train/Monkey'. There are 0 directories and 770 images in '/content/train/Monkey/Label'. There are 1 directories and 70 images in '/content/train/Cattle'. There are 0 directories and 70 images in '/content/train/Cattle/Label'. There are 1 directories and 562 images in '/content/train/Snake'. There are 0 directories and 562 images in '/content/train/Snake/Label'. There are 1 directories and 216 images in '/content/train/Rabbit'. 
There are 0 directories and 216 images in '/content/train/Rabbit/Label'. There are 1 directories and 80 images in '/content/train/Hedgehog'. There are 0 directories and 80 images in '/content/train/Hedgehog/Label'. There are 1 directories and 327 images in '/content/train/Deer'. There are 0 directories and 327 images in '/content/train/Deer/Label'. There are 1 directories and 80 images in '/content/train/Lynx'. There are 0 directories and 80 images in '/content/train/Lynx/Label'. There are 1 directories and 202 images in '/content/train/Goat'. There are 0 directories and 202 images in '/content/train/Goat/Label'. There are 1 directories and 133 images in '/content/train/Goldfish'. There are 0 directories and 133 images in '/content/train/Goldfish/Label'. There are 1 directories and 24 images in '/content/train/Turtle'. There are 0 directories and 24 images in '/content/train/Turtle/Label'. There are 1 directories and 388 images in '/content/train/Chicken'. There are 0 directories and 388 images in '/content/train/Chicken/Label'. There are 1 directories and 190 images in '/content/train/Pig'. There are 0 directories and 190 images in '/content/train/Pig/Label'. There are 1 directories and 99 images in '/content/train/Sheep'. There are 0 directories and 99 images in '/content/train/Sheep/Label'. There are 1 directories and 15 images in '/content/train/Squid'. There are 0 directories and 15 images in '/content/train/Squid/Label'. There are 1 directories and 229 images in '/content/train/Polar bear'. There are 0 directories and 229 images in '/content/train/Polar bear/Label'. There are 1 directories and 123 images in '/content/train/Leopard'. There are 0 directories and 123 images in '/content/train/Leopard/Label'. There are 1 directories and 377 images in '/content/train/Penguin'. There are 0 directories and 377 images in '/content/train/Penguin/Label'. There are 1 directories and 1429 images in '/content/train/Moths and butterflies'. 
There are 0 directories and 1429 images in '/content/train/Moths and butterflies/Label'. There are 1 directories and 148 images in '/content/train/Fox'. There are 0 directories and 148 images in '/content/train/Fox/Label'. There are 1 directories and 7 images in '/content/train/Seahorse'. There are 0 directories and 7 images in '/content/train/Seahorse/Label'. There are 1 directories and 856 images in '/content/train/Spider'. There are 0 directories and 856 images in '/content/train/Spider/Label'. There are 1 directories and 151 images in '/content/train/Mouse'. There are 0 directories and 151 images in '/content/train/Mouse/Label'. There are 1 directories and 287 images in '/content/train/Whale'. There are 0 directories and 287 images in '/content/train/Whale/Label'. There are 1 directories and 1875 images in '/content/train/Butterfly'. There are 0 directories and 1875 images in '/content/train/Butterfly/Label'. There are 1 directories and 457 images in '/content/train/Jellyfish'. There are 0 directories and 457 images in '/content/train/Jellyfish/Label'. There are 1 directories and 61 images in '/content/train/Mule'. There are 0 directories and 61 images in '/content/train/Mule/Label'. There are 1 directories and 108 images in '/content/train/Crocodile'. There are 0 directories and 108 images in '/content/train/Crocodile/Label'. There are 1 directories and 214 images in '/content/train/Rhinoceros'. There are 0 directories and 214 images in '/content/train/Rhinoceros/Label'.
walk_through_dir("/content/test")
There are 80 directories and 0 images in '/content/test'. There are 1 directories and 114 images in '/content/test/Snail'. There are 0 directories and 114 images in '/content/test/Snail/Label'. There are 1 directories and 33 images in '/content/test/Goose'. There are 0 directories and 33 images in '/content/test/Goose/Label'. There are 1 directories and 51 images in '/content/test/Raccoon'. There are 0 directories and 51 images in '/content/test/Raccoon/Label'. There are 1 directories and 26 images in '/content/test/Tiger'. There are 0 directories and 26 images in '/content/test/Tiger/Label'. There are 1 directories and 87 images in '/content/test/Sea turtle'. There are 0 directories and 87 images in '/content/test/Sea turtle/Label'. There are 1 directories and 15 images in '/content/test/Worm'. There are 0 directories and 15 images in '/content/test/Worm/Label'. There are 1 directories and 31 images in '/content/test/Zebra'. There are 0 directories and 31 images in '/content/test/Zebra/Label'. There are 1 directories and 55 images in '/content/test/Starfish'. There are 0 directories and 55 images in '/content/test/Starfish/Label'. There are 1 directories and 16 images in '/content/test/Canary'. There are 0 directories and 16 images in '/content/test/Canary/Label'. There are 1 directories and 260 images in '/content/test/Lizard'. There are 0 directories and 260 images in '/content/test/Lizard/Label'. There are 1 directories and 68 images in '/content/test/Squirrel'. There are 0 directories and 68 images in '/content/test/Squirrel/Label'. There are 1 directories and 48 images in '/content/test/Sea lion'. There are 0 directories and 48 images in '/content/test/Sea lion/Label'. There are 1 directories and 35 images in '/content/test/Cheetah'. There are 0 directories and 35 images in '/content/test/Cheetah/Label'. There are 1 directories and 73 images in '/content/test/Bull'. There are 0 directories and 73 images in '/content/test/Bull/Label'. 
There are 1 directories and 64 images in '/content/test/Swan'. There are 0 directories and 64 images in '/content/test/Swan/Label'. There are 1 directories and 617 images in '/content/test/Fish'. There are 0 directories and 617 images in '/content/test/Fish/Label'. There are 1 directories and 32 images in '/content/test/Woodpecker'. There are 0 directories and 32 images in '/content/test/Woodpecker/Label'. There are 1 directories and 42 images in '/content/test/Red panda'. There are 0 directories and 42 images in '/content/test/Red panda/Label'. There are 1 directories and 39 images in '/content/test/Brown bear'. There are 0 directories and 39 images in '/content/test/Brown bear/Label'. There are 1 directories and 23 images in '/content/test/Giraffe'. There are 0 directories and 23 images in '/content/test/Giraffe/Label'. There are 1 directories and 107 images in '/content/test/Tortoise'. There are 0 directories and 107 images in '/content/test/Tortoise/Label'. There are 1 directories and 19 images in '/content/test/Panda'. There are 0 directories and 19 images in '/content/test/Panda/Label'. There are 1 directories and 33 images in '/content/test/Elephant'. There are 0 directories and 33 images in '/content/test/Elephant/Label'. There are 1 directories and 38 images in '/content/test/Jaguar'. There are 0 directories and 38 images in '/content/test/Jaguar/Label'. There are 1 directories and 43 images in '/content/test/Centipede'. There are 0 directories and 43 images in '/content/test/Centipede/Label'. There are 1 directories and 43 images in '/content/test/Kangaroo'. There are 0 directories and 43 images in '/content/test/Kangaroo/Label'. There are 1 directories and 27 images in '/content/test/Camel'. There are 0 directories and 27 images in '/content/test/Camel/Label'. There are 1 directories and 77 images in '/content/test/Shrimp'. There are 0 directories and 77 images in '/content/test/Shrimp/Label'. 
There are 1 directories and 22 images in '/content/test/Hippopotamus'. There are 0 directories and 22 images in '/content/test/Hippopotamus/Label'. There are 1 directories and 1 images in '/content/test/Tick'. There are 0 directories and 1 images in '/content/test/Tick/Label'. There are 1 directories and 114 images in '/content/test/Crab'. There are 0 directories and 114 images in '/content/test/Crab/Label'. There are 1 directories and 39 images in '/content/test/Bear'. There are 0 directories and 39 images in '/content/test/Bear/Label'. There are 1 directories and 69 images in '/content/test/Hamster'. There are 0 directories and 69 images in '/content/test/Hamster/Label'. There are 1 directories and 35 images in '/content/test/Ladybug'. There are 0 directories and 35 images in '/content/test/Ladybug/Label'. There are 1 directories and 88 images in '/content/test/Duck'. There are 0 directories and 88 images in '/content/test/Duck/Label'. There are 1 directories and 178 images in '/content/test/Eagle'. There are 0 directories and 178 images in '/content/test/Eagle/Label'. There are 1 directories and 131 images in '/content/test/Sparrow'. There are 0 directories and 131 images in '/content/test/Sparrow/Label'. There are 1 directories and 61 images in '/content/test/Otter'. There are 0 directories and 61 images in '/content/test/Otter/Label'. There are 1 directories and 76 images in '/content/test/Ostrich'. There are 0 directories and 76 images in '/content/test/Ostrich/Label'. There are 1 directories and 70 images in '/content/test/Caterpillar'. There are 0 directories and 70 images in '/content/test/Caterpillar/Label'. There are 1 directories and 61 images in '/content/test/Harbor seal'. There are 0 directories and 61 images in '/content/test/Harbor seal/Label'. There are 1 directories and 77 images in '/content/test/Raven'. There are 0 directories and 77 images in '/content/test/Raven/Label'. There are 1 directories and 33 images in '/content/test/Magpie'. 
There are 0 directories and 33 images in '/content/test/Magpie/Label'. There are 1 directories and 70 images in '/content/test/Owl'. There are 0 directories and 70 images in '/content/test/Owl/Label'. There are 1 directories and 143 images in '/content/test/Horse'. There are 0 directories and 143 images in '/content/test/Horse/Label'. There are 1 directories and 43 images in '/content/test/Turkey'. There are 0 directories and 43 images in '/content/test/Turkey/Label'. There are 1 directories and 180 images in '/content/test/Parrot'. There are 0 directories and 180 images in '/content/test/Parrot/Label'. There are 1 directories and 100 images in '/content/test/Lion'. There are 0 directories and 100 images in '/content/test/Lion/Label'. There are 1 directories and 77 images in '/content/test/Frog'. There are 0 directories and 77 images in '/content/test/Frog/Label'. There are 1 directories and 44 images in '/content/test/Scorpion'. There are 0 directories and 44 images in '/content/test/Scorpion/Label'. There are 1 directories and 58 images in '/content/test/Shark'. There are 0 directories and 58 images in '/content/test/Shark/Label'. There are 1 directories and 24 images in '/content/test/Koala'. There are 0 directories and 24 images in '/content/test/Koala/Label'. There are 1 directories and 321 images in '/content/test/Monkey'. There are 0 directories and 321 images in '/content/test/Monkey/Label'. There are 1 directories and 171 images in '/content/test/Cattle'. There are 0 directories and 171 images in '/content/test/Cattle/Label'. There are 1 directories and 213 images in '/content/test/Snake'. There are 0 directories and 213 images in '/content/test/Snake/Label'. There are 1 directories and 126 images in '/content/test/Rabbit'. There are 0 directories and 126 images in '/content/test/Rabbit/Label'. There are 1 directories and 49 images in '/content/test/Hedgehog'. There are 0 directories and 49 images in '/content/test/Hedgehog/Label'. 
There are 1 directories and 177 images in '/content/test/Deer'. There are 0 directories and 177 images in '/content/test/Deer/Label'. There are 1 directories and 34 images in '/content/test/Lynx'. There are 0 directories and 34 images in '/content/test/Lynx/Label'. There are 1 directories and 94 images in '/content/test/Goat'. There are 0 directories and 94 images in '/content/test/Goat/Label'. There are 1 directories and 31 images in '/content/test/Goldfish'. There are 0 directories and 31 images in '/content/test/Goldfish/Label'. There are 1 directories and 5 images in '/content/test/Turtle'. There are 0 directories and 5 images in '/content/test/Turtle/Label'. There are 1 directories and 137 images in '/content/test/Chicken'. There are 0 directories and 137 images in '/content/test/Chicken/Label'. There are 1 directories and 96 images in '/content/test/Pig'. There are 0 directories and 96 images in '/content/test/Pig/Label'. There are 1 directories and 74 images in '/content/test/Sheep'. There are 0 directories and 74 images in '/content/test/Sheep/Label'. There are 1 directories and 13 images in '/content/test/Squid'. There are 0 directories and 13 images in '/content/test/Squid/Label'. There are 1 directories and 55 images in '/content/test/Polar bear'. There are 0 directories and 55 images in '/content/test/Polar bear/Label'. There are 1 directories and 57 images in '/content/test/Leopard'. There are 0 directories and 57 images in '/content/test/Leopard/Label'. There are 1 directories and 61 images in '/content/test/Penguin'. There are 0 directories and 61 images in '/content/test/Penguin/Label'. There are 1 directories and 29 images in '/content/test/Moths and butterflies'. There are 0 directories and 29 images in '/content/test/Moths and butterflies/Label'. There are 1 directories and 69 images in '/content/test/Fox'. There are 0 directories and 69 images in '/content/test/Fox/Label'. There are 1 directories and 33 images in '/content/test/Seahorse'. 
There are 0 directories and 33 images in '/content/test/Seahorse/Label'. There are 1 directories and 207 images in '/content/test/Spider'. There are 0 directories and 207 images in '/content/test/Spider/Label'. There are 1 directories and 83 images in '/content/test/Mouse'. There are 0 directories and 83 images in '/content/test/Mouse/Label'. There are 1 directories and 52 images in '/content/test/Whale'. There are 0 directories and 52 images in '/content/test/Whale/Label'. There are 1 directories and 170 images in '/content/test/Butterfly'. There are 0 directories and 170 images in '/content/test/Butterfly/Label'. There are 1 directories and 92 images in '/content/test/Jellyfish'. There are 0 directories and 92 images in '/content/test/Jellyfish/Label'. There are 1 directories and 36 images in '/content/test/Mule'. There are 0 directories and 36 images in '/content/test/Mule/Label'. There are 1 directories and 76 images in '/content/test/Crocodile'. There are 0 directories and 76 images in '/content/test/Crocodile/Label'. There are 1 directories and 34 images in '/content/test/Rhinoceros'. There are 0 directories and 34 images in '/content/test/Rhinoceros/Label'.
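The listing above shows that every class folder contains a `Label` subfolder holding annotation text files rather than images, so counting usable images per class means skipping that subdirectory. A minimal sketch (the `count_images` helper is hypothetical, not part of the dataset or helper file):

```python
import os

def count_images(class_dir):
    """Count plain files directly inside a class directory,
    skipping subdirectories such as the 'Label' annotation folder."""
    return sum(
        1 for name in os.listdir(class_dir)
        if not os.path.isdir(os.path.join(class_dir, name))
    )
```

On the unzipped dataset, `count_images("/content/train/Snail")` should match the 370 images reported above.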
Data generators¶
Setting up the train and test directory paths.
# setting up directories
train_dir = "/content/train"
test_dir = "/content/test"
IMG_SIZE = (224, 224)
train_data = tf.keras.preprocessing.image_dataset_from_directory(train_dir,
                                                                 label_mode="categorical",
                                                                 image_size=IMG_SIZE)
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
                                                                label_mode="categorical",
                                                                image_size=IMG_SIZE,
                                                                shuffle=False)
Found 22566 files belonging to 80 classes.
Found 6505 files belonging to 80 classes.
len(train_data), len(test_data)
(706, 204)
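706 and 204 are batch counts, not image counts: `image_dataset_from_directory` groups images with a default `batch_size` of 32, so the dataset length is the file count divided by 32, rounded up. A quick sanity check:

```python
import math

batch_size = 32  # default for image_dataset_from_directory
train_batches = math.ceil(22566 / batch_size)
test_batches = math.ceil(6505 / batch_size)
print(train_batches, test_batches)  # → 706 204
```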
class_names = train_data.class_names
train_data.take(1)
<TakeDataset shapes: ((None, 224, 224, 3), (None, 80)), types: (tf.float32, tf.float32)>
train_data
<BatchDataset shapes: ((None, 224, 224, 3), (None, 80)), types: (tf.float32, tf.float32)>
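With `label_mode="categorical"` each label is a one-hot vector of length 80 (the `(None, 80)` in the shape above), so recovering a class name means taking the argmax and indexing into `class_names`. A numpy sketch with an illustrative three-class subset in place of the real list:

```python
import numpy as np

class_names = ["Bear", "Canary", "Zebra"]  # illustrative subset, not the full 80
one_hot = np.array([0.0, 1.0, 0.0])        # a label as produced by label_mode="categorical"

predicted = class_names[int(np.argmax(one_hot))]
print(predicted)  # → Canary
```

The same argmax-and-index step is what turns a model's softmax output back into a readable label later on.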
Visualizing a single random image¶
# Visualizing a random image
arr = []
label = random.choice(class_names)
path = "/content/test"
basepath = os.path.join(path, label)
for fname in os.listdir(basepath):
    path = os.path.join(basepath, fname)
    if not os.path.isdir(path):  # skip the "Label" annotation subdirectory
        arr.append(path)

# Plotting the image
choice = random.choice(arr)
img = plt.imread(choice)
plt.axis(False)
plt.title(label)
plt.imshow(img/255.);
def show():
    """Return a random image path (and its class label) from the train set."""
    arr = []
    label = random.choice(class_names)
    path = "/content/train"
    basepath = os.path.join(path, label)
    for fname in os.listdir(basepath):
        path = os.path.join(basepath, fname)
        if not os.path.isdir(path):  # skip the "Label" annotation subdirectory
            arr.append(path)
    img = random.choice(arr)
    return img, label
img, label = show()
img = plt.imread(img)
plt.axis(False)
plt.title(label)
plt.imshow(img/255.);
Visualizing multiple random images¶
fig, axs = plt.subplots(3, 3)
fig.set_size_inches(18.5, 12.5)
for i in range(0, 3):
    for j in range(0, 3):
        img, label = show()
        img = plt.imread(img)
        axs[i, j].axis(False)
        axs[i, j].set_title(label, color="green")
        axs[i, j].imshow(img/255.)
Data Augmentation Layer¶
# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential([
    preprocessing.RandomFlip("horizontal"),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomWidth(0.2)
], name="data_augmentation")
Visualizing Augmented images¶
# Original image
bpath = "/content/train"
c_class = random.choice(class_names)
fpath = os.path.join(bpath, c_class)
c_file = random.choice(os.listdir(fpath))
while c_file == "Label":  # re-draw if we picked the annotation subdirectory
    c_file = random.choice(os.listdir(fpath))
path = os.path.join(fpath, c_file)

img = plt.imread(path)
plt.axis(False)
plt.imshow(img)
plt.title(f"Original Image: {c_class}", color="green");
# Augment the image
augmented_img = data_augmentation(tf.expand_dims(img, axis=0))
plt.figure()
plt.imshow(tf.squeeze(augmented_img)/255.)
plt.title(f"Augmented image: {c_class}", color="blue")
plt.axis(False);
Models¶
From here we will train a series of different models until we reach a satisfactory accuracy.
Model 1 - EfficientNetB0¶
Training our first model - EfficientNetB0
# Setting up base model
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
# Setup model architecture with trainable top layers
inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")
x = data_augmentation(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D(name="Global_average_pooling_layer")(x)
outputs = tf.keras.layers.Dense(len(class_names), activation="softmax", name="output_layer")(x)
model_1 = tf.keras.Model(inputs, outputs)
# Compiling the model
model_1.compile(loss=tf.keras.losses.categorical_crossentropy,
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])
# Fitting the model
history_1 = model_1.fit(train_data,
                        epochs=5,
                        steps_per_epoch=len(train_data),
                        validation_data=test_data,
                        validation_steps=int(0.25 * len(test_data)))
Epoch 1/5
706/706 [==============================] - 385s 494ms/step - loss: 1.4190 - accuracy: 0.6520 - val_loss: 0.8912 - val_accuracy: 0.7567
Epoch 2/5
706/706 [==============================] - 309s 434ms/step - loss: 0.8786 - accuracy: 0.7439 - val_loss: 0.7554 - val_accuracy: 0.7868
Epoch 3/5
706/706 [==============================] - 285s 400ms/step - loss: 0.7810 - accuracy: 0.7640 - val_loss: 0.7334 - val_accuracy: 0.7911
Epoch 4/5
706/706 [==============================] - 274s 385ms/step - loss: 0.7119 - accuracy: 0.7820 - val_loss: 0.7344 - val_accuracy: 0.7880
Epoch 5/5
706/706 [==============================] - 269s 378ms/step - loss: 0.6792 - accuracy: 0.7889 - val_loss: 0.7456 - val_accuracy: 0.7831
model_1.evaluate(test_data)
204/204 [==============================] - 66s 325ms/step - loss: 0.6638 - accuracy: 0.8020
[0.6637811064720154, 0.8019984364509583]
plot_loss_curves(history_1)
model_1.summary()
Model: "model_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_layer (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, None, None, 1280) 4049571 _________________________________________________________________ Global_average_pooling_layer (None, 1280) 0 _________________________________________________________________ output_layer (Dense) (None, 80) 102480 ================================================================= Total params: 4,152,051 Trainable params: 102,480 Non-trainable params: 4,049,571 _________________________________________________________________
Fine-tuning Model 1¶
# Unfreeze all of the layers in the base model
base_model.trainable = True
# Refreeze every layer except for the last 5
for layer in base_model.layers[:-5]:
    layer.trainable = False
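The slice `base_model.layers[:-5]` re-freezes everything except the last five layers, leaving only those free to update during fine-tuning. The same pattern on plain Python objects (a stand-in for Keras layers, used here only to illustrate the slicing):

```python
class Layer:
    """Minimal stand-in for a Keras layer with a trainable flag."""
    def __init__(self):
        self.trainable = True

layers = [Layer() for _ in range(10)]

# Freeze every layer except the last 5
for layer in layers[:-5]:
    layer.trainable = False

trainable_count = sum(l.trainable for l in layers)
print(trainable_count)  # → 5
```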
# Recompile model with lower learning rate
model_1.compile(loss='categorical_crossentropy',
                optimizer=tf.keras.optimizers.Adam(1e-4),
                metrics=['accuracy'])
# Fine-tune for 5 more epochs (up to epoch 10 in total)
fine_tune_epochs = 10
history_1_fine_tune_1 = model_1.fit(train_data,
                                    epochs=fine_tune_epochs,
                                    validation_data=test_data,
                                    validation_steps=int(0.25 * len(test_data)),
                                    initial_epoch=history_1.epoch[-1])
Epoch 5/10
706/706 [==============================] - 277s 379ms/step - loss: 0.6091 - accuracy: 0.8048 - val_loss: 0.7274 - val_accuracy: 0.7837
Epoch 6/10
706/706 [==============================] - 264s 371ms/step - loss: 0.5571 - accuracy: 0.8183 - val_loss: 0.7294 - val_accuracy: 0.7837
Epoch 7/10
706/706 [==============================] - 260s 365ms/step - loss: 0.5251 - accuracy: 0.8263 - val_loss: 0.7169 - val_accuracy: 0.7849
Epoch 8/10
706/706 [==============================] - 260s 364ms/step - loss: 0.4930 - accuracy: 0.8336 - val_loss: 0.7313 - val_accuracy: 0.7825
Epoch 9/10
706/706 [==============================] - 260s 365ms/step - loss: 0.4660 - accuracy: 0.8429 - val_loss: 0.7299 - val_accuracy: 0.7812
Epoch 10/10
706/706 [==============================] - 258s 362ms/step - loss: 0.4344 - accuracy: 0.8517 - val_loss: 0.7542 - val_accuracy: 0.7757
model_1.evaluate(test_data)
204/204 [==============================] - 66s 324ms/step - loss: 0.6667 - accuracy: 0.8040
[0.6667168736457825, 0.8039969205856323]
plot_loss_curves(history_1_fine_tune_1)
compare_historys(history_1, history_1_fine_tune_1, initial_epochs=5)
Model 2 - MobileNetV2¶
For the second model, we will train a MobileNetV2.
# Setting up the base model
base_model = tf.keras.applications.MobileNetV2(include_top=False)
base_model.trainable = False

# Setup model architecture with trainable top layers
inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")
x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(inputs)
x = data_augmentation(x)
x = base_model(x, training=False)  # keep the frozen base in inference mode
x = tf.keras.layers.GlobalAveragePooling2D(name="Global_average_pooling_layer")(x)
outputs = tf.keras.layers.Dense(len(class_names), activation="softmax", name="output_layer")(x)
model_2 = tf.keras.Model(inputs, outputs)

# Compiling the model
model_2.compile(loss=tf.keras.losses.categorical_crossentropy,
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])
# Fitting the model
history_2 = model_2.fit(train_data,
                        epochs=5,
                        steps_per_epoch=len(train_data),
                        validation_data=test_data,
                        validation_steps=int(0.25 * len(test_data)))
706/706 [==============================] - 265s 367ms/step - loss: 1.7895 - accuracy: 0.5510 - val_loss: 1.2218 - val_accuracy: 0.6765 Epoch 2/5 706/706 [==============================] - 250s 351ms/step - loss: 1.2853 - accuracy: 0.6457 - val_loss: 1.0012 - val_accuracy: 0.7120 Epoch 3/5 706/706 [==============================] - 250s 350ms/step - loss: 1.1951 - accuracy: 0.6644 - val_loss: 0.9694 - val_accuracy: 0.7206 Epoch 4/5 706/706 [==============================] - 249s 350ms/step - loss: 1.1450 - accuracy: 0.6748 - val_loss: 0.9085 - val_accuracy: 0.7255 Epoch 5/5 706/706 [==============================] - 248s 348ms/step - loss: 1.0904 - accuracy: 0.6878 - val_loss: 0.8025 - val_accuracy: 0.7610
model_2.evaluate(test_data)
204/204 [==============================] - 62s 304ms/step - loss: 0.9046 - accuracy: 0.7470
[0.9046367406845093, 0.7469638586044312]
plot_loss_curves(history_2)
model_2.summary()
Model: "model_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_layer (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ rescaling_6 (Rescaling) (None, 224, 224, 3) 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ mobilenetv2_1.00_224 (Functi (None, None, None, 1280) 2257984 _________________________________________________________________ Global_average_pooling_layer (None, 1280) 0 _________________________________________________________________ output_layer (Dense) (None, 80) 102480 ================================================================= Total params: 2,360,464 Trainable params: 102,480 Non-trainable params: 2,257,984 _________________________________________________________________
Training for 5 more epochs¶
# Fitting the model for 5 more epochs
history_3 = model_2.fit(train_data,
                        epochs=10,
                        steps_per_epoch=len(train_data),
                        validation_data=test_data,
                        validation_steps=int(0.25 * len(test_data)),
                        initial_epoch=history_2.epoch[-1])
Epoch 5/10 706/706 [==============================] - 249s 349ms/step - loss: 1.0426 - accuracy: 0.7021 - val_loss: 0.8679 - val_accuracy: 0.7433 Epoch 6/10 706/706 [==============================] - 246s 345ms/step - loss: 1.0269 - accuracy: 0.6988 - val_loss: 0.9560 - val_accuracy: 0.7249 Epoch 7/10 706/706 [==============================] - 245s 343ms/step - loss: 1.0244 - accuracy: 0.7042 - val_loss: 0.9563 - val_accuracy: 0.7175 Epoch 8/10 706/706 [==============================] - 245s 344ms/step - loss: 0.9984 - accuracy: 0.7081 - val_loss: 1.0016 - val_accuracy: 0.7077 Epoch 9/10 706/706 [==============================] - 244s 342ms/step - loss: 0.9750 - accuracy: 0.7147 - val_loss: 0.9553 - val_accuracy: 0.7224 Epoch 10/10 706/706 [==============================] - 244s 343ms/step - loss: 0.9610 - accuracy: 0.7140 - val_loss: 1.0077 - val_accuracy: 0.7181
model_2.evaluate(test_data)
204/204 [==============================] - 62s 302ms/step - loss: 0.9911 - accuracy: 0.7305
[0.9911302328109741, 0.7305150032043457]
plot_loss_curves(history_3)
Fine-tuning Model 2¶
# Unfreeze all of the layers in the base model
base_model.trainable = True

# Refreeze every layer except for the last 15
for layer in base_model.layers[:-15]:
    layer.trainable = False

# Recompile the model with a lower learning rate
model_2.compile(loss='categorical_crossentropy',
                optimizer=tf.keras.optimizers.Adam(1e-4),
                metrics=['accuracy'])

# Fine-tune for 5 more epochs
fine_tune_epochs = 10
history_2_fine_tune_1 = model_2.fit(train_data,
                                    epochs=fine_tune_epochs,
                                    validation_data=test_data,
                                    validation_steps=int(0.25 * len(test_data)),
                                    initial_epoch=history_2.epoch[-1])
Epoch 5/10 706/706 [==============================] - 251s 349ms/step - loss: 1.0266 - accuracy: 0.6926 - val_loss: 1.1079 - val_accuracy: 0.6881 Epoch 6/10 706/706 [==============================] - 247s 347ms/step - loss: 0.9163 - accuracy: 0.7162 - val_loss: 1.0497 - val_accuracy: 0.6961 Epoch 7/10 706/706 [==============================] - 247s 346ms/step - loss: 0.8299 - accuracy: 0.7394 - val_loss: 1.0409 - val_accuracy: 0.7145 Epoch 8/10 706/706 [==============================] - 245s 344ms/step - loss: 0.7846 - accuracy: 0.7507 - val_loss: 1.0553 - val_accuracy: 0.6998 Epoch 9/10 706/706 [==============================] - 245s 343ms/step - loss: 0.7434 - accuracy: 0.7647 - val_loss: 1.0220 - val_accuracy: 0.7010 Epoch 10/10 706/706 [==============================] - 244s 342ms/step - loss: 0.6925 - accuracy: 0.7787 - val_loss: 1.1357 - val_accuracy: 0.6906
plot_loss_curves(history_2_fine_tune_1)
compare_historys(history_2, history_2_fine_tune_1, initial_epochs=5)
Saving and loading the best model¶
model_1.save("/content/drive/MyDrive/Vertebrate")
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/generic_utils.py:497: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument. category=CustomMaskWarning)
INFO:tensorflow:Assets written to: /content/drive/MyDrive/Vertebrate/assets
model = tf.keras.models.load_model("/content/drive/MyDrive/Vertebrate")
WARNING:absl:Importing a function (__inference_block6a_expand_activation_layer_call_and_return_conditional_losses_120504) with ops with custom gradients. Will likely fail if a gradient is requested. ... (the same warning repeats for every EfficientNetB0 layer; truncated for brevity)
Insights into the Model¶
pred_probs = model.predict(test_data)
pred_probs[0]
array([9.95746434e-01, 7.98750261e-04, 5.36264724e-06, 5.04913260e-07, 1.80242228e-06, 2.08619166e-08, 9.17762882e-05, 5.99489340e-06, 3.99222199e-06, 1.12156783e-07, 9.10530889e-06, 3.89620510e-07, 2.17189694e-07, 9.07555091e-07, 5.87315844e-06, 3.00074444e-05, 8.74887701e-06, 1.51713975e-05, 9.47852095e-06, 6.63890069e-06, 6.48519699e-06, 1.41525030e-04, 1.03216178e-06, 1.35053469e-05, 8.20562946e-06, 2.04325170e-06, 6.34944445e-05, 1.47582487e-05, 9.34305717e-06, 5.56890996e-08, 4.82232281e-06, 5.42736927e-07, 1.75819594e-06, 2.15488535e-06, 4.47753692e-08, 1.30741171e-06, 1.69339437e-05, 1.89190686e-07, 8.86139969e-06, 2.32440839e-03, 1.22100903e-06, 1.60957170e-05, 4.07277548e-06, 2.14648855e-04, 2.53343387e-05, 1.63850160e-08, 6.32403589e-06, 5.81048774e-08, 7.91709408e-06, 1.04315208e-04, 5.36949738e-05, 3.50652613e-07, 2.34109993e-05, 9.28517875e-06, 4.08919550e-06, 2.30036420e-07, 2.73630144e-06, 1.90511764e-06, 7.23328696e-07, 2.31042932e-07, 8.93910510e-08, 2.13830631e-07, 6.72375245e-06, 2.75089769e-05, 4.32019442e-05, 1.82281315e-06, 7.52797291e-07, 1.59899798e-07, 4.86101380e-05, 9.77739205e-07, 1.11960208e-06, 2.73184014e-06, 8.93133318e-08, 1.21727635e-05, 1.03689752e-06, 1.62140748e-06, 7.72632302e-06, 2.41787575e-07, 3.57819863e-06, 9.51070973e-08], dtype=float32)
pred_probs[0].argmax()
0
pred_classes = pred_probs.argmax(axis=1)
pred_classes[:10]
array([ 0, 0, 54, 54, 0, 54, 40, 54, 54, 54])
len(pred_classes), len(pred_probs)
(6505, 6505)
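The integer indices in `pred_classes` only become meaningful once they are mapped back to the vertebrate names. A minimal sketch of that mapping, using a hypothetical three-class `class_names` list and toy probabilities in place of the notebook's real ones:

```python
import numpy as np

# Toy stand-ins for the notebook's `class_names` and `pred_probs`
# (shape: n_samples x n_classes) — for illustration only.
class_names = ["Bear", "Bull", "Sea turtle"]
pred_probs = np.array([[0.90, 0.05, 0.05],
                       [0.10, 0.20, 0.70]])

pred_classes = pred_probs.argmax(axis=1)              # index of the highest probability per row
pred_labels = [class_names[i] for i in pred_classes]  # human-readable names
print(pred_labels)  # ['Bear', 'Sea turtle']
```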
# Unbatch the test data to recover the ground-truth labels
y_labels = []
for images, labels in test_data.unbatch():
    y_labels.append(labels.numpy().argmax())  # labels are one-hot, so take the index of the max
y_labels[:10]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
len(y_labels)
6505
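With `y_labels` and `pred_classes` aligned (6505 entries each), the overall accuracy is just the fraction of matching entries — a quick sanity check against `model.evaluate()`. A sketch with toy arrays in place of the real ones:

```python
import numpy as np

# Toy stand-ins for the notebook's `y_labels` and `pred_classes`.
y_labels = np.array([0, 0, 1, 2, 2])
pred_classes = np.array([0, 1, 1, 2, 2])

# Element-wise match averaged over all samples mirrors the accuracy
# metric reported by model.evaluate().
accuracy = (y_labels == pred_classes).mean()
print(accuracy)  # 0.8
```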
The matrix will be clearer and easier to read if you open the image in a new tab and zoom in a bit.
Plotting Confusion matrix¶
# Plotting a confusion matrix
make_confusion_matrix(y_true=y_labels,
                      y_pred=pred_classes,
                      classes=class_names,
                      figsize=(100, 100),
                      text_size=20,
                      norm=False,
                      savefig=True)
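Beyond eyeballing the plot, the raw confusion matrix can be mined for the single most-confused class pair: zero the diagonal (correct predictions) and take the largest remaining cell. A sketch on a toy 3x3 matrix, assuming the same rows-are-true / columns-are-predicted convention as the plot above:

```python
import numpy as np

# Toy confusion matrix (rows = true class, cols = predicted class)
# standing in for the full 80x80 one — for illustration only.
cm = np.array([[50,  2,  0],
               [ 1, 40,  9],
               [ 0,  3, 45]])

off_diag = cm.copy()
np.fill_diagonal(off_diag, 0)  # ignore correct predictions
true_idx, pred_idx = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(true_idx, pred_idx)  # 1 2 -> class 1 is most often mistaken for class 2
```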
Classification report¶
print(classification_report(y_labels, pred_classes))
precision recall f1-score support 0 0.73 0.56 0.64 39 1 0.69 0.87 0.77 39 2 0.42 0.56 0.48 73 3 0.90 0.95 0.92 170 4 0.86 0.89 0.87 27 5 0.44 0.88 0.58 16 6 0.69 0.66 0.67 70 7 0.82 0.37 0.51 171 8 0.76 0.81 0.79 43 9 0.44 0.54 0.49 35 10 0.89 0.93 0.91 137 11 0.77 0.96 0.85 114 12 0.99 0.91 0.95 76 13 0.89 0.86 0.88 177 14 0.76 0.55 0.64 88 15 0.93 0.93 0.93 178 16 0.81 0.91 0.86 33 17 0.92 0.71 0.80 617 18 0.91 0.99 0.94 69 19 0.90 0.92 0.91 77 20 0.79 0.83 0.81 23 21 0.73 0.67 0.70 94 22 0.47 0.65 0.54 31 23 0.26 0.24 0.25 33 24 0.53 0.26 0.35 69 25 0.41 0.64 0.50 61 26 1.00 0.94 0.97 49 27 0.76 0.73 0.74 22 28 0.70 0.87 0.78 143 29 0.38 0.26 0.31 38 30 0.94 0.97 0.95 92 31 0.84 0.88 0.86 43 32 0.81 0.92 0.86 24 33 0.89 0.97 0.93 35 34 0.48 0.51 0.50 57 35 0.93 0.96 0.95 100 36 0.91 0.97 0.94 260 37 0.87 0.76 0.81 34 38 0.74 0.97 0.84 33 39 0.95 0.96 0.96 321 40 0.70 0.48 0.57 29 41 0.54 0.64 0.59 83 42 0.73 0.53 0.61 36 43 0.96 0.97 0.97 76 44 0.79 0.72 0.75 61 45 0.92 0.87 0.90 70 46 0.89 0.89 0.89 19 47 0.96 0.91 0.94 180 48 0.78 0.95 0.86 61 49 0.91 0.85 0.88 96 50 0.82 0.98 0.89 55 51 0.91 0.93 0.92 126 52 0.98 0.86 0.92 51 53 0.88 0.78 0.83 77 54 0.72 0.98 0.83 42 55 0.44 0.91 0.60 34 56 0.98 1.00 0.99 44 57 0.48 0.23 0.31 48 58 0.46 0.60 0.52 87 59 0.75 0.09 0.16 33 60 0.34 0.97 0.50 58 61 0.75 0.73 0.74 74 62 0.92 0.44 0.60 77 63 0.99 0.95 0.97 114 64 0.93 0.95 0.94 213 65 0.95 0.86 0.90 131 66 0.98 0.97 0.97 207 67 0.22 0.15 0.18 13 68 0.77 0.81 0.79 68 69 0.82 0.96 0.88 55 70 0.56 0.89 0.69 64 71 0.00 0.00 0.00 1 72 0.67 0.92 0.77 26 73 0.62 0.50 0.55 107 74 0.88 0.70 0.78 43 75 0.00 0.00 0.00 5 76 0.80 0.77 0.78 52 77 0.76 0.91 0.83 32 78 0.52 0.80 0.63 15 79 0.94 1.00 0.97 31 accuracy 0.80 6505 macro avg 0.74 0.75 0.73 6505 weighted avg 0.82 0.80 0.80 6505
# Get a dictionary of the classification report
classification_report_dict = classification_report(y_labels, pred_classes, output_dict=True)
classification_report_dict
{'0': {'f1-score': 0.6376811594202899, 'precision': 0.7333333333333333, 'recall': 0.5641025641025641, 'support': 39}, '1': {'f1-score': 0.7727272727272728, 'precision': 0.6938775510204082, 'recall': 0.8717948717948718, 'support': 39}, '10': {'f1-score': 0.9110320284697508, 'precision': 0.8888888888888888, 'recall': 0.9343065693430657, 'support': 137}, '11': {'f1-score': 0.8549019607843137, 'precision': 0.7730496453900709, 'recall': 0.956140350877193, 'support': 114}, '12': {'f1-score': 0.9452054794520548, 'precision': 0.9857142857142858, 'recall': 0.9078947368421053, 'support': 76}, '13': {'f1-score': 0.8767908309455587, 'precision': 0.8895348837209303, 'recall': 0.864406779661017, 'support': 177}, '14': {'f1-score': 0.6357615894039735, 'precision': 0.7619047619047619, 'recall': 0.5454545454545454, 'support': 88}, '15': {'f1-score': 0.9325842696629213, 'precision': 0.9325842696629213, 'recall': 0.9325842696629213, 'support': 178}, '16': {'f1-score': 0.8571428571428571, 'precision': 0.8108108108108109, 'recall': 0.9090909090909091, 'support': 33}, '17': {'f1-score': 0.8032786885245903, 'precision': 0.9168399168399168, 'recall': 0.7147487844408428, 'support': 617}, '18': {'f1-score': 0.9444444444444444, 'precision': 0.9066666666666666, 'recall': 0.9855072463768116, 'support': 69}, '19': {'f1-score': 0.9102564102564102, 'precision': 0.8987341772151899, 'recall': 0.922077922077922, 'support': 77}, '2': {'f1-score': 0.4795321637426901, 'precision': 0.41836734693877553, 'recall': 0.5616438356164384, 'support': 73}, '20': {'f1-score': 0.8085106382978724, 'precision': 0.7916666666666666, 'recall': 0.8260869565217391, 'support': 23}, '21': {'f1-score': 0.7, 'precision': 0.7325581395348837, 'recall': 0.6702127659574468, 'support': 94}, '22': {'f1-score': 0.5405405405405406, 'precision': 0.46511627906976744, 'recall': 0.6451612903225806, 'support': 31}, '23': {'f1-score': 0.25, 'precision': 0.25806451612903225, 'recall': 0.24242424242424243, 'support': 33}, '24': {'f1-score': 
0.34951456310679613, 'precision': 0.5294117647058824, 'recall': 0.2608695652173913, 'support': 69}, '25': {'f1-score': 0.5, 'precision': 0.4105263157894737, 'recall': 0.639344262295082, 'support': 61}, '26': {'f1-score': 0.968421052631579, 'precision': 1.0, 'recall': 0.9387755102040817, 'support': 49}, '27': {'f1-score': 0.7441860465116279, 'precision': 0.7619047619047619, 'recall': 0.7272727272727273, 'support': 22}, '28': {'f1-score': 0.7763975155279503, 'precision': 0.6983240223463687, 'recall': 0.8741258741258742, 'support': 143}, '29': {'f1-score': 0.3125, 'precision': 0.38461538461538464, 'recall': 0.2631578947368421, 'support': 38}, '3': {'f1-score': 0.9230769230769231, 'precision': 0.8950276243093923, 'recall': 0.9529411764705882, 'support': 170}, '30': {'f1-score': 0.9518716577540107, 'precision': 0.9368421052631579, 'recall': 0.967391304347826, 'support': 92}, '31': {'f1-score': 0.8636363636363636, 'precision': 0.8444444444444444, 'recall': 0.8837209302325582, 'support': 43}, '32': {'f1-score': 0.8627450980392156, 'precision': 0.8148148148148148, 'recall': 0.9166666666666666, 'support': 24}, '33': {'f1-score': 0.9315068493150684, 'precision': 0.8947368421052632, 'recall': 0.9714285714285714, 'support': 35}, '34': {'f1-score': 0.4957264957264957, 'precision': 0.48333333333333334, 'recall': 0.5087719298245614, 'support': 57}, '35': {'f1-score': 0.9458128078817734, 'precision': 0.9320388349514563, 'recall': 0.96, 'support': 100}, '36': {'f1-score': 0.9402985074626865, 'precision': 0.9130434782608695, 'recall': 0.9692307692307692, 'support': 260}, '37': {'f1-score': 0.8125, 'precision': 0.8666666666666667, 'recall': 0.7647058823529411, 'support': 34}, '38': {'f1-score': 0.8421052631578948, 'precision': 0.7441860465116279, 'recall': 0.9696969696969697, 'support': 33}, '39': {'f1-score': 0.958139534883721, 'precision': 0.9537037037037037, 'recall': 0.9626168224299065, 'support': 321}, '4': {'f1-score': 0.8727272727272727, 'precision': 0.8571428571428571, 
'recall': 0.8888888888888888, 'support': 27}, '40': {'f1-score': 0.5714285714285714, 'precision': 0.7, 'recall': 0.4827586206896552, 'support': 29}, '41': {'f1-score': 0.5856353591160222, 'precision': 0.5408163265306123, 'recall': 0.6385542168674698, 'support': 83}, '42': {'f1-score': 0.6129032258064515, 'precision': 0.7307692307692307, 'recall': 0.5277777777777778, 'support': 36}, '43': {'f1-score': 0.9673202614379085, 'precision': 0.961038961038961, 'recall': 0.9736842105263158, 'support': 76}, '44': {'f1-score': 0.7521367521367521, 'precision': 0.7857142857142857, 'recall': 0.7213114754098361, 'support': 61}, '45': {'f1-score': 0.8970588235294117, 'precision': 0.9242424242424242, 'recall': 0.8714285714285714, 'support': 70}, '46': {'f1-score': 0.8947368421052632, 'precision': 0.8947368421052632, 'recall': 0.8947368421052632, 'support': 19}, '47': {'f1-score': 0.9371428571428572, 'precision': 0.9647058823529412, 'recall': 0.9111111111111111, 'support': 180}, '48': {'f1-score': 0.8592592592592593, 'precision': 0.7837837837837838, 'recall': 0.9508196721311475, 'support': 61}, '49': {'f1-score': 0.8817204301075269, 'precision': 0.9111111111111111, 'recall': 0.8541666666666666, 'support': 96}, '5': {'f1-score': 0.5833333333333334, 'precision': 0.4375, 'recall': 0.875, 'support': 16}, '50': {'f1-score': 0.8925619834710744, 'precision': 0.8181818181818182, 'recall': 0.9818181818181818, 'support': 55}, '51': {'f1-score': 0.9176470588235294, 'precision': 0.9069767441860465, 'recall': 0.9285714285714286, 'support': 126}, '52': {'f1-score': 0.9166666666666665, 'precision': 0.9777777777777777, 'recall': 0.8627450980392157, 'support': 51}, '53': {'f1-score': 0.8275862068965517, 'precision': 0.8823529411764706, 'recall': 0.7792207792207793, 'support': 77}, '54': {'f1-score': 0.8282828282828282, 'precision': 0.7192982456140351, 'recall': 0.9761904761904762, 'support': 42}, '55': {'f1-score': 0.596153846153846, 'precision': 0.44285714285714284, 'recall': 0.9117647058823529, 
'support': 34}, '56': {'f1-score': 0.9887640449438202, 'precision': 0.9777777777777777, 'recall': 1.0, 'support': 44}, '57': {'f1-score': 0.3098591549295775, 'precision': 0.4782608695652174, 'recall': 0.22916666666666666, 'support': 48}, '58': {'f1-score': 0.52, 'precision': 0.46017699115044247, 'recall': 0.5977011494252874, 'support': 87}, '59': {'f1-score': 0.16216216216216214, 'precision': 0.75, 'recall': 0.09090909090909091, 'support': 33}, '6': {'f1-score': 0.6715328467153284, 'precision': 0.6865671641791045, 'recall': 0.6571428571428571, 'support': 70}, '60': {'f1-score': 0.5045045045045046, 'precision': 0.34146341463414637, 'recall': 0.9655172413793104, 'support': 58}, '61': {'f1-score': 0.7397260273972601, 'precision': 0.75, 'recall': 0.7297297297297297, 'support': 74}, '62': {'f1-score': 0.5964912280701754, 'precision': 0.918918918918919, 'recall': 0.44155844155844154, 'support': 77}, '63': {'f1-score': 0.9686098654708519, 'precision': 0.9908256880733946, 'recall': 0.9473684210526315, 'support': 114}, '64': {'f1-score': 0.9395348837209302, 'precision': 0.9308755760368663, 'recall': 0.9483568075117371, 'support': 213}, '65': {'f1-score': 0.904, 'precision': 0.9495798319327731, 'recall': 0.8625954198473282, 'support': 131}, '66': {'f1-score': 0.970873786407767, 'precision': 0.975609756097561, 'recall': 0.966183574879227, 'support': 207}, '67': {'f1-score': 0.18181818181818185, 'precision': 0.2222222222222222, 'recall': 0.15384615384615385, 'support': 13}, '68': {'f1-score': 0.7913669064748201, 'precision': 0.7746478873239436, 'recall': 0.8088235294117647, 'support': 68}, '69': {'f1-score': 0.8833333333333333, 'precision': 0.8153846153846154, 'recall': 0.9636363636363636, 'support': 55}, '7': {'f1-score': 0.5080645161290323, 'precision': 0.8181818181818182, 'recall': 0.3684210526315789, 'support': 171}, '70': {'f1-score': 0.6909090909090908, 'precision': 0.5643564356435643, 'recall': 0.890625, 'support': 64}, '71': {'f1-score': 0.0, 'precision': 0.0, 
'recall': 0.0, 'support': 1}, '72': {'f1-score': 0.7741935483870968, 'precision': 0.6666666666666666, 'recall': 0.9230769230769231, 'support': 26}, '73': {'f1-score': 0.5492227979274612, 'precision': 0.6162790697674418, 'recall': 0.4953271028037383, 'support': 107}, '74': {'f1-score': 0.7792207792207793, 'precision': 0.8823529411764706, 'recall': 0.6976744186046512, 'support': 43}, '75': {'f1-score': 0.0, 'precision': 0.0, 'recall': 0.0, 'support': 5}, '76': {'f1-score': 0.7843137254901961, 'precision': 0.8, 'recall': 0.7692307692307693, 'support': 52}, '77': {'f1-score': 0.8285714285714286, 'precision': 0.7631578947368421, 'recall': 0.90625, 'support': 32}, '78': {'f1-score': 0.6315789473684211, 'precision': 0.5217391304347826, 'recall': 0.8, 'support': 15}, '79': {'f1-score': 0.96875, 'precision': 0.9393939393939394, 'recall': 1.0, 'support': 31}, '8': {'f1-score': 0.7865168539325844, 'precision': 0.7608695652173914, 'recall': 0.813953488372093, 'support': 43}, '9': {'f1-score': 0.48717948717948717, 'precision': 0.4418604651162791, 'recall': 0.5428571428571428, 'support': 35}, 'accuracy': 0.8039969254419678, 'macro avg': {'f1-score': 0.7294278586502132, 'precision': 0.7369688412181357, 'recall': 0.75231069456249, 'support': 6505}, 'weighted avg': {'f1-score': 0.8014605822663653, 'precision': 0.8221714366960635, 'recall': 0.8039969254419678, 'support': 6505}}
# Create empty dictionary
class_f1_scores = {}
# Loop through classification report items
for k, v in classification_report_dict.items():
if k == "accuracy": # stop once we get to accuracy key
break
else:
# Append class names and f1-scores to new dictionary
class_f1_scores[class_names[int(k)]] = v["f1-score"]
class_f1_scores
{'Bear': 0.6376811594202899, 'Brown bear': 0.7727272727272728, 'Bull': 0.4795321637426901, 'Butterfly': 0.9230769230769231, 'Camel': 0.8727272727272727, 'Canary': 0.5833333333333334, 'Caterpillar': 0.6715328467153284, 'Cattle': 0.5080645161290323, 'Centipede': 0.7865168539325844, 'Cheetah': 0.48717948717948717, 'Chicken': 0.9110320284697508, 'Crab': 0.8549019607843137, 'Crocodile': 0.9452054794520548, 'Deer': 0.8767908309455587, 'Duck': 0.6357615894039735, 'Eagle': 0.9325842696629213, 'Elephant': 0.8571428571428571, 'Fish': 0.8032786885245903, 'Fox': 0.9444444444444444, 'Frog': 0.9102564102564102, 'Giraffe': 0.8085106382978724, 'Goat': 0.7, 'Goldfish': 0.5405405405405406, 'Goose': 0.25, 'Hamster': 0.34951456310679613, 'Harbor seal': 0.5, 'Hedgehog': 0.968421052631579, 'Hippopotamus': 0.7441860465116279, 'Horse': 0.7763975155279503, 'Jaguar': 0.3125, 'Jellyfish': 0.9518716577540107, 'Kangaroo': 0.8636363636363636, 'Koala': 0.8627450980392156, 'Ladybug': 0.9315068493150684, 'Leopard': 0.4957264957264957, 'Lion': 0.9458128078817734, 'Lizard': 0.9402985074626865, 'Lynx': 0.8125, 'Magpie': 0.8421052631578948, 'Monkey': 0.958139534883721, 'Moths and butterflies': 0.5714285714285714, 'Mouse': 0.5856353591160222, 'Mule': 0.6129032258064515, 'Ostrich': 0.9673202614379085, 'Otter': 0.7521367521367521, 'Owl': 0.8970588235294117, 'Panda': 0.8947368421052632, 'Parrot': 0.9371428571428572, 'Penguin': 0.8592592592592593, 'Pig': 0.8817204301075269, 'Polar bear': 0.8925619834710744, 'Rabbit': 0.9176470588235294, 'Raccoon': 0.9166666666666665, 'Raven': 0.8275862068965517, 'Red panda': 0.8282828282828282, 'Rhinoceros': 0.596153846153846, 'Scorpion': 0.9887640449438202, 'Sea lion': 0.3098591549295775, 'Sea turtle': 0.52, 'Seahorse': 0.16216216216216214, 'Shark': 0.5045045045045046, 'Sheep': 0.7397260273972601, 'Shrimp': 0.5964912280701754, 'Snail': 0.9686098654708519, 'Snake': 0.9395348837209302, 'Sparrow': 0.904, 'Spider': 0.970873786407767, 'Squid': 0.18181818181818185, 'Squirrel': 
0.7913669064748201, 'Starfish': 0.8833333333333333, 'Swan': 0.6909090909090908, 'Tick': 0.0, 'Tiger': 0.7741935483870968, 'Tortoise': 0.5492227979274612, 'Turkey': 0.7792207792207793, 'Turtle': 0.0, 'Whale': 0.7843137254901961, 'Woodpecker': 0.8285714285714286, 'Worm': 0.6315789473684211, 'Zebra': 0.96875}
# Turn f1-scores into dataframe for visualization
f1_scores = pd.DataFrame({"class_name": list(class_f1_scores.keys()),
"f1-score": list(class_f1_scores.values())}).sort_values("f1-score", ascending=False)
f1_scores
class_name | f1-score | |
---|---|---|
56 | Scorpion | 0.988764 |
66 | Spider | 0.970874 |
79 | Zebra | 0.968750 |
63 | Snail | 0.968610 |
26 | Hedgehog | 0.968421 |
... | ... | ... |
23 | Goose | 0.250000 |
67 | Squid | 0.181818 |
59 | Seahorse | 0.162162 |
75 | Turtle | 0.000000 |
71 | Tick | 0.000000 |
80 rows × 2 columns
Plotting F-1 scores¶
fig, ax = plt.subplots(figsize=(12, 25))
scores = ax.barh(range(len(f1_scores)), f1_scores["f1-score"].values)
ax.set_yticks(range(len(f1_scores)))
ax.set_yticklabels(list(f1_scores["class_name"]))
ax.set_xlabel("f1-score")
ax.set_title("F1-Scores for 80 Different Classes")
ax.invert_yaxis(); # reverse the order
def autolabel(rects):
"""
Attach a text label above each bar displaying its height (it's value).
"""
for rect in rects:
width = rect.get_width()
ax.text(1.03*width, rect.get_y() + rect.get_height()/1.5,
f"{width:.2f}",
ha='center', va='bottom')
autolabel(scores)
Predictions¶
def load_and_prep_image(filename, img_shape=224, scale=True):
"""
Reads in an image from filename, turns it into a tensor and reshapes into
(224, 224, 3).
Parameters
----------
filename (str): string filename of target image
img_shape (int): size to resize target image to, default 224
scale (bool): whether to scale pixel values to range(0, 1), default True
"""
# Read in the image
img = tf.io.read_file(filename)
# Decode it into a tensor
img = tf.io.decode_image(img)
# Resize the image
img = tf.image.resize(img, [img_shape, img_shape])
if scale:
# Rescale the image (get all values between 0 and 1)
return img/255.
else:
return img
Confused predictions¶
plt.figure(figsize=(17, 10))
for i in range(3):
# Choose a random image from a random class
class_name = random.choice(class_names)
filename = random.choice(os.listdir(test_dir + "/" + class_name))
filepath = test_dir + "/" + class_name + "/" + filename
# Load the image and make predictions
img = load_and_prep_image(filepath, scale=False) # don't scale images for EfficientNet predictions
pred_prob = model.predict(tf.expand_dims(img, axis=0)) # model accepts tensors of shape [None, 224, 224, 3]
pred_class = class_names[pred_prob.argmax()] # find the predicted class
# Plot the image(s)
plt.subplot(1, 3, i+1)
plt.imshow(img/255.)
if class_name == pred_class: # Change the color of text based on whether prediction is right or wrong
title_color = "g"
else:
title_color = "r"
plt.title(f"actual: {class_name}, pred: {pred_class}, prob: {pred_prob.max():.2f}", c=title_color)
plt.axis(False);
arr=[]
for el in class_names:
path = "/content/test"
basepath = os.path.join(path, el)
for fname in os.listdir(basepath):
path = os.path.join(basepath, fname)
if not os.path.isdir(path):
# skip directories
arr.append(path)
continue
plt.figure(figsize=(17, 10))
for i in range(9):
# Choose a random image from a random class
filepath = random.choice(arr)
class_name = filepath.split("/")[3]
# Load the image and make predictions
img = load_and_prep_image(filepath, scale=False) # don't scale images for EfficientNet predictions
pred_prob = model.predict(tf.expand_dims(img, axis=0)) # model accepts tensors of shape [None, 224, 224, 3]
pred_class = class_names[pred_prob.argmax()] # find the predicted class
# Plot the image(s)
plt.subplot(3, 3, i+1)
plt.imshow(img/255.)
if class_name == pred_class: # Change the color of text based on whether prediction is right or wrong
title_color = "g"
else:
title_color = "r"
plt.title(f"actual: {class_name}, pred: {pred_class}, prob: {pred_prob.max():.2f}", c=title_color)
plt.axis(False);
# Get the filenames of all of our test data
filepaths = []
for filepath in test_data.list_files("/content/test/*/*.jpg",
shuffle=False):
filepaths.append(filepath.numpy())
filepaths[:10]
[b'/content/test/Bear/0df78ee76bafd3a9.jpg', b'/content/test/Bear/0f899aca6d0fb6e1.jpg', b'/content/test/Bear/1cca48c57103a42c.jpg', b'/content/test/Bear/1fa809bf6cf5ea36.jpg', b'/content/test/Bear/200046eca85cd992.jpg', b'/content/test/Bear/23bf858cb1d0ef63.jpg', b'/content/test/Bear/23d1d39d81d411da.jpg', b'/content/test/Bear/322e901e952ea866.jpg', b'/content/test/Bear/3d77555f2ede0b38.jpg', b'/content/test/Bear/3df2b6a098712fee.jpg']
pred_df = pd.DataFrame({"img_path": filepaths,
"y_true": y_labels,
"y_pred": pred_classes,
"pred_conf": pred_probs.max(axis=1), # get the maximum prediction probability value
"y_true_classname": [class_names[i] for i in y_labels],
"y_pred_classname": [class_names[i] for i in pred_classes]})
pred_df.head()
img_path | y_true | y_pred | pred_conf | y_true_classname | y_pred_classname | |
---|---|---|---|---|---|---|
0 | b'/content/test/Bear/0df78ee76bafd3a9.jpg' | 0 | 0 | 0.995746 | Bear | Bear |
1 | b'/content/test/Bear/0f899aca6d0fb6e1.jpg' | 0 | 0 | 0.959899 | Bear | Bear |
2 | b'/content/test/Bear/1cca48c57103a42c.jpg' | 0 | 54 | 0.998436 | Bear | Red panda |
3 | b'/content/test/Bear/1fa809bf6cf5ea36.jpg' | 0 | 54 | 0.999917 | Bear | Red panda |
4 | b'/content/test/Bear/200046eca85cd992.jpg' | 0 | 0 | 0.566307 | Bear | Bear |
# check for pred
pred_df["pred_correct"] = pred_df["y_true"] == pred_df["y_pred"]
pred_df.head()
img_path | y_true | y_pred | pred_conf | y_true_classname | y_pred_classname | pred_correct | |
---|---|---|---|---|---|---|---|
0 | b'/content/test/Bear/0df78ee76bafd3a9.jpg' | 0 | 0 | 0.995746 | Bear | Bear | True |
1 | b'/content/test/Bear/0f899aca6d0fb6e1.jpg' | 0 | 0 | 0.959899 | Bear | Bear | True |
2 | b'/content/test/Bear/1cca48c57103a42c.jpg' | 0 | 54 | 0.998436 | Bear | Red panda | False |
3 | b'/content/test/Bear/1fa809bf6cf5ea36.jpg' | 0 | 54 | 0.999917 | Bear | Red panda | False |
4 | b'/content/test/Bear/200046eca85cd992.jpg' | 0 | 0 | 0.566307 | Bear | Bear | True |
# Get the top 100 wrong examples
top_100_wrong = pred_df[pred_df["pred_correct"] == False].sort_values("pred_conf", ascending=False)[:100]
top_100_wrong.head(20)
img_path | y_true | y_pred | pred_conf | y_true_classname | y_pred_classname | pred_correct | |
---|---|---|---|---|---|---|---|
5702 | b'/content/test/Sparrow/4ecd53011e47f224.jpg' | 65 | 77 | 0.999977 | Sparrow | Woodpecker | False |
37 | b'/content/test/Bear/f0cd1050b09dd625.jpg' | 0 | 54 | 0.999972 | Bear | Red panda | False |
3 | b'/content/test/Bear/1fa809bf6cf5ea36.jpg' | 0 | 54 | 0.999917 | Bear | Red panda | False |
1906 | b'/content/test/Fish/ad6768f15203ed1a.jpg' | 17 | 60 | 0.999916 | Fish | Shark | False |
9 | b'/content/test/Bear/3df2b6a098712fee.jpg' | 0 | 54 | 0.999850 | Bear | Red panda | False |
4715 | b'/content/test/Raccoon/3df2b6a098712fee.jpg' | 52 | 54 | 0.999850 | Raccoon | Red panda | False |
2264 | b'/content/test/Giraffe/8e989aae24d45230.jpg' | 20 | 79 | 0.999843 | Giraffe | Zebra | False |
1889 | b'/content/test/Fish/a704ec4be2217b23.jpg' | 17 | 60 | 0.999791 | Fish | Shark | False |
1617 | b'/content/test/Fish/39eb679fc9ba8a0f.jpg' | 17 | 60 | 0.999693 | Fish | Shark | False |
1558 | b'/content/test/Fish/237302d5580cc2fd.jpg' | 17 | 16 | 0.999687 | Fish | Elephant | False |
20 | b'/content/test/Bear/9e411cef88b11bb7.jpg' | 0 | 54 | 0.999662 | Bear | Red panda | False |
2620 | b'/content/test/Hippopotamus/6defd81749de10a0.... | 27 | 55 | 0.999642 | Hippopotamus | Rhinoceros | False |
2014 | b'/content/test/Fish/db3798bf28e1a847.jpg' | 17 | 60 | 0.999624 | Fish | Shark | False |
1546 | b'/content/test/Fish/1d59db2495553468.jpg' | 17 | 60 | 0.999597 | Fish | Shark | False |
1811 | b'/content/test/Fish/86d0b8f94d5b0024.jpg' | 17 | 60 | 0.999532 | Fish | Shark | False |
4178 | b'/content/test/Panda/c14db25ce91e4398.jpg' | 46 | 54 | 0.999522 | Panda | Red panda | False |
1894 | b'/content/test/Fish/a90853d412ac901b.jpg' | 17 | 60 | 0.999499 | Fish | Shark | False |
7 | b'/content/test/Bear/322e901e952ea866.jpg' | 0 | 54 | 0.999329 | Bear | Red panda | False |
5665 | b'/content/test/Sparrow/14fce647401ac7c6.jpg' | 65 | 5 | 0.999253 | Sparrow | Canary | False |
8 | b'/content/test/Bear/3d77555f2ede0b38.jpg' | 0 | 54 | 0.998597 | Bear | Red panda | False |
Wrong Predictions¶
# Visualize some of the most wrong examples
images_to_view = 9
start_index = 10 # change the start index to view more
plt.figure(figsize=(15, 10))
for i, row in enumerate(top_100_wrong[start_index:start_index+images_to_view].itertuples()):
plt.subplot(3, 3, i+1)
img = load_and_prep_image(row[1], scale=True)
_, _, _, _, pred_prob, y_true, y_pred, _ = row # only interested in a few parameters of each row
plt.imshow(img)
plt.title(f"actual: {y_true}, pred: {y_pred} \nprob: {pred_prob:.2f}", color="red")
plt.axis(False)