As you may know, plt.subplots() returns a figure together with an array of Axes objects, commonly named axes. The array has shape nrows by ncols.
fig, axes = plt.subplots(nrows, ncols, figsize=(x,y))
You can then access each individual subplot in this array using indices like axes[i, j].
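One caveat worth knowing: matplotlib squeezes out singleton dimensions by default, so axes is a 2-D array only when both nrows and ncols are greater than 1. A quick sketch of that behavior:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 3)
print(axes.shape)  # (2, 3) -- a 2-D array of Axes

fig, axes = plt.subplots(1, 3)
print(axes.shape)  # (3,)   -- squeezed down to 1-D

fig, axes = plt.subplots(2, 3, squeeze=False)
print(axes.shape)  # (2, 3) -- squeeze=False always gives a 2-D array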
Below is an example that plots some CIFAR-10 images using the out-of-the-box Keras dataset.
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

fig, axes = plt.subplots(nrows, ncols, figsize=(15, 15))
print(axes.shape)  # (4, 5)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols):
    # Pick a random training image for the subplot at row i // ncols, column i % ncols
    index = np.random.randint(0, n_training)
    axes[i // ncols, i % ncols].imshow(train_images[index])
    axes[i // ncols, i % ncols].set_title(train_labels[index])  # numeric label, stored as a shape-(1,) array
    axes[i // ncols, i % ncols].axis("off")

plt.subplots_adjust(hspace=0.2, wspace=0.2)
Here, arange returns the consecutive integers 0, ..., 19. We derive the row and column numbers from the flat index with i // ncols and i % ncols.
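As an aside, Python's built-in divmod performs the same row/column split in one step; a minimal sketch (the loop bound here is arbitrary):

ncols = 5
for i in range(10):
    row, col = divmod(i, ncols)  # equivalent to (i // ncols, i % ncols)
    print(i, "->", (row, col))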
We can simplify the above by flattening the axes array with NumPy's flatten or ravel functions. flatten always returns a new copy, whereas ravel returns a view of the original array whenever possible, which makes it more memory- and speed-efficient. Because it is a view, changes made through the ravelled array are reflected in the original object.
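A small standalone demonstration of that difference (the array here is just for illustration, not the axes array):

import numpy as np

a = np.arange(6).reshape(2, 3)
r = a.ravel()    # a view: no data is copied
f = a.flatten()  # always a fresh copy

r[0] = 99
print(a[0, 0])                 # 99 -- the write through the view changed the original
print(f[0])                    # 0  -- the copy is unaffected
print(np.shares_memory(a, r))  # True
print(np.shares_memory(a, f))  # False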
Below is the code that accomplishes the same thing as above, but using ravel:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

fig, axes = plt.subplots(nrows, ncols, figsize=(15, 15))
axes = axes.ravel()  # flatten the (4, 5) array of Axes into 1-D
print(axes.shape)    # (20,)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols):
    index = np.random.randint(0, n_training)
    axes[i].imshow(train_images[index])
    axes[i].set_title(train_labels[index])
    axes[i].axis("off")

plt.subplots_adjust(hspace=0.2, wspace=0.2)
You can see that the axes can now be accessed with a single linear index, without any odd-looking index arithmetic. The code is simpler and more readable.
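In fact, once the axes are flattened you can drop the index entirely and iterate over the Axes objects themselves; here is a sketch of that style, reusing the variables from the snippet above:

# Iterate over the flattened Axes directly -- no index bookkeeping at all
for ax in axes:
    index = np.random.randint(0, n_training)
    ax.imshow(train_images[index])
    ax.set_title(train_labels[index])
    ax.axis("off")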
Below is the combined code if you want to try it out. Uncomment the lines marked with a double comment (# #) for the 2-D indexing approach, or the lines with a single comment (#) for the ravel approach.
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

fig, axes = plt.subplots(nrows, ncols, figsize=(15, 15))
# # print(axes.shape)  # 2-D indexing approach: shape is (4, 5)
# axes = axes.ravel()  # ravel approach
# print(axes.shape)    # (20,)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols):
    index = np.random.randint(0, n_training)
    # # axes[i // ncols, i % ncols].imshow(train_images[index])
    # # axes[i // ncols, i % ncols].set_title(train_labels[index])
    # # axes[i // ncols, i % ncols].axis("off")
    # axes[i].imshow(train_images[index])
    # axes[i].set_title(train_labels[index])
    # axes[i].axis("off")

plt.subplots_adjust(hspace=0.2, wspace=0.2)