Monday, May 31, 2021

Is there any difference between tf.keras.Sequential() and tf.keras.models.Sequential()?

There doesn't seem to be any.

tf.keras.Sequential() seems to be the newer way of creating sequential models, as the more recent Google tutorials use it. However, tf.keras.models.Sequential() is still going to be around, because the basic Google tutorials still use it.

Not much info is available, but you can refer to this SO post.
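A quick way to check this in your own environment (a minimal sketch; on the TF 2.x versions I have checked, the two names point to the same class object):

import tensorflow as tf

# If this prints True, models built with either name are identical.
print(tf.keras.Sequential is tf.keras.models.Sequential)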

TO DO

 1. https://www.pyimagesearch.com/2019/10/28/3-ways-to-create-a-keras-model-with-tensorflow-2-0-sequential-functional-and-model-subclassing/


Sunday, May 30, 2021

TypeError: 'int' object is not iterable

I am starting to believe that the word "TypeError" is used in Python with two meanings. First, it indicates an invalid pairing of data type and operation. Second, it says "Oh, a typo was encountered, looks like you forgot to type something in" and hence the TypeError!

I forgot to type range() in a for loop and it immediately reminded me by giving the error

TypeError: 'int' object is not iterable


😃😃

You will also get it if you forget to type range in a for loop:

x = 5
for i in x:              # TypeError: 'int' object is not iterable, since an int is obviously not iterable
  print(i)

We probably intended something like below:

x = 5
for i in range(x):
  print(i)

Thursday, May 27, 2021

In SGD one sample is one batch

I had some confusion about SGD, and many resources on the net added to it. From the viewpoint of statistics, the term stochastic indicates a random sample drawn out of many. So one can easily come to believe that SGD is faster because it randomly picks one sample out of a batch. While the random picking is correct, in practice SGD applies gradients immediately after each sample is processed. The reason is that SGD treats each sample as a batch of size one. Jason Brownlee cleared this up for me when I asked him a question. Many thanks to Jason!

https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/#comment-609705
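In Keras terms this simply means fitting with batch_size=1. Below is a minimal sketch (the model and data are made up purely for illustration):

import numpy as np
import tensorflow as tf

# toy data: 100 samples with 4 features each, binary labels
X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="binary_crossentropy")

# batch_size=1 => gradients are applied after every single sample,
# i.e. each sample is treated as one batch: this is "true" SGD
model.fit(X, y, epochs=2, batch_size=1)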

Turning any CNN image classifier into an object detector with Keras, TensorFlow, and OpenCV


OpenCV Selective Search for Object Detection



Region proposal object detection with OpenCV, Keras, and TensorFlow




R-CNN object detection with Keras, TensorFlow, and Deep Learning

https://www.pyimagesearch.com/2020/07/13/r-cnn-object-detection-with-keras-tensorflow-and-deep-learning/


https://github.com/AarohiSingla/Faster-R-CNN/blob/main/data_prep.ipynb

Wednesday, May 26, 2021

Analytics Vidhya Blood Cell Detection articles, three parts



A Step-by-Step Introduction to the Basic Object Detection Algorithms (Part 1)

https://www.analyticsvidhya.com/blog/2018/10/a-step-by-step-introduction-to-the-basic-object-detection-algorithms-part-1/



A Practical Implementation of the Faster R-CNN Algorithm for Object Detection (Part 2 – with Python codes)

https://www.analyticsvidhya.com/blog/2018/11/implementation-faster-r-cnn-python-object-detection/


A Practical Guide to Object Detection using the Popular YOLO Framework – Part III (with Python codes)

https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/

Revisited MNIST DCGAN : somewhat simplified now

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import time

# rm -r "/content/ckpoint"
no_of_epochs_to_checkpoint = 2
no_of_epochs = 10
no_of_examples = 16
no_of_dimensions_for_noise = 100
ckpoint_prefix = "/content/ckpoint/ckpt"
BATCH_SIZE = 256
BUFFER_SIZE = 60000


(train_images , train_labels) , (_,_) = tf.keras.datasets.mnist.load_data()

train_images = train_images.reshape(train_images.shape[0] , 28 , 28 , 1).astype("float32")
train_images = (train_images -127.5) /127.5

train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

# the checkpoint object is created further below, once the models and optimizers it should track exist

def get_generator() : 
  model = keras.models.Sequential()
  model.add(layers.Dense(7*7*256 , use_bias = False , input_shape = (100,)))
  model.add(layers.BatchNormalization())
  model.add(layers.LeakyReLU())

  model.add(layers.Reshape((7 , 7 , 256)))

  model.add(layers.Conv2DTranspose(128 , 5 , strides = 1 , use_bias = False , padding = "same"))
  model.add(layers.BatchNormalization())
  model.add(layers.LeakyReLU())

  model.add(layers.Conv2DTranspose(64 , 5 , strides = 2 , use_bias = False , padding = "same"))
  model.add(layers.BatchNormalization())
  model.add(layers.LeakyReLU())

  model.add(layers.Conv2DTranspose(1 , 5 , strides = 2 , use_bias = False , padding = "same" , activation = "tanh"))

  return model
def get_discriminator() : 
  model = keras.models.Sequential()

  model.add(layers.Conv2D(64 , 5, strides = 2 , padding = "same" , input_shape = (28,28,1)))
  model.add(layers.LeakyReLU())
  model.add(layers.Dropout(0.3))

  model.add(layers.Conv2D(128 , 5, strides = 2 , padding = "same"))
  model.add(layers.LeakyReLU())
  model.add(layers.Dropout(0.3))

  model.add(layers.Flatten())
  model.add(layers.Dense(1))

  return model  
  
generator = get_generator()
discriminator = get_discriminator()

#LOSSES
cross_entropy = keras.losses.BinaryCrossentropy(from_logits = True)

def generator_loss(fake_output) : 
  return cross_entropy(tf.ones_like(fake_output) , fake_output)

def discriminator_loss(real_output , fake_output) : 
  real_loss = cross_entropy(tf.ones_like(real_output) , real_output)
  fake_loss = cross_entropy(tf.zeros_like(fake_output) , fake_output)
  return real_loss + fake_loss
#LOSSES

#OPTIMIZER
gen_optimizer = tf.keras.optimizers.Adam(1e-4)
disc_optimizer = tf.keras.optimizers.Adam(1e-4)
#OPTIMIZER

#CHECKPOINT : hand the checkpoint the objects it should track;
# a bare tf.train.Checkpoint() would not save the models or optimizers
checkpoint = tf.train.Checkpoint(generator = generator,
                                 discriminator = discriminator,
                                 gen_optimizer = gen_optimizer,
                                 disc_optimizer = disc_optimizer)



def generate_and_save_files_after_epoch(generator , epoch_number) : 
  seed = tf.random.normal([no_of_examples, no_of_dimensions_for_noise])
  predictions = generator(seed , training = False)

  plt.figure(figsize = (15,15) )

  for i in range(predictions.shape[0]) : 
    plt.subplot(4,4,i+1)
    plt.imshow(predictions[i,:,:,0] *127.5  + 127.5 )
    plt.axis("off")
  plt.savefig( "/content/epochwiseoutput/" + "epoch_{:04d}".format(epoch_number))
  plt.show()

  return 0 

def train_step(imageset) : 
  noise = tf.random.normal([BATCH_SIZE, no_of_dimensions_for_noise])

  with tf.GradientTape() as gen_tape , tf.GradientTape() as disc_tape : 
    fake_images = generator(noise , training = True)
    real_output = discriminator(imageset , training = True)
    fake_output = discriminator(fake_images , training = True)

    gen_loss = generator_loss(fake_output)
    disc_loss = discriminator_loss(real_output , fake_output)

  # gradients are computed outside the tape context
  gen_gradient = gen_tape.gradient(gen_loss , generator.trainable_variables)
  disc_gradient = disc_tape.gradient(disc_loss , discriminator.trainable_variables)

  gen_optimizer.apply_gradients(zip(gen_gradient , generator.trainable_variables))
  disc_optimizer.apply_gradients(zip(disc_gradient , discriminator.trainable_variables))
  return 0 

def train(batched_training_dataset , no_of_epochs) : 
  for epoch in range(no_of_epochs) : 
    start_time = time.time()
    print("started at {} epoch {:04d}".format(start_time , epoch))
    for batch in batched_training_dataset : 
      train_step(batch)
    if (epoch%no_of_epochs_to_checkpoint == 0 ) : 
      checkpoint.save(file_prefix = ckpoint_prefix)
    generate_and_save_files_after_epoch(generator , epoch + 1)
    print("time taken {} for epoch {:04d}".format(time.time() - start_time , epoch))

  generate_and_save_files_after_epoch(generator , no_of_epochs )

import os
# create the output directory before training starts,
# because images are saved after every epoch
if not os.path.exists("/content/epochwiseoutput") :
  os.mkdir("/content/epochwiseoutput")

train(train_dataset, no_of_epochs)

import imageio 
import glob 

with imageio.get_writer("animatedfile.gif" , mode = "I") as writer : 
  filenames = glob.glob("/content/epochwiseoutput/*.png")
  filenames = sorted(filenames)
  for file in filenames : 
    img = imageio.imread(file)
    writer.append_data(img)
  # append the last frame once more so the gif lingers on it
  img = imageio.imread(file)
  writer.append_data(img)

Tuesday, May 25, 2021

Generator for DCGAN

Following is the code for the generator of a DCGAN (the MNIST DCGAN pattern, parameterized here for a 40x80 image):


import tensorflow as tf 
from tensorflow import keras 
from tensorflow.keras import layers 
import matplotlib.pyplot as plt 



img_height = 40 
img_width  = 80 

model = keras.models.Sequential()

model.add(layers.Dense(int(img_height/4) * int(img_width/4) * 256 , use_bias = False ,
                       input_shape = (100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())

model.add(layers.Reshape(( int(img_height/4) , int(img_width/4) , 256)))


model.add(layers.Conv2DTranspose(128 , 5 , strides = 1 , padding = "same" , 
                                 use_bias = False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())


model.add(layers.Conv2DTranspose(64 , 5 , strides = 2 , use_bias = False ,
                                 padding = "same"))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())


model.add(layers.Conv2DTranspose(1 , 5 , strides = 2 , use_bias = False , 
                                 padding = "same"))


noise = tf.random.normal([1,100])
gen_image = model(noise , training = False)

plt.imshow(gen_image[0, :, :, 0])


Monday, May 24, 2021

Good info on the difference between Categorical and Sparse Categorical Cross-entropy loss functions

https://stackoverflow.com/questions/58565394/what-is-the-difference-between-sparse-categorical-crossentropy-and-categorical-c
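In short (my own summary, with a sketch below assuming a probability-style output): categorical cross-entropy expects one-hot encoded labels, while sparse categorical cross-entropy expects plain integer labels; both compute the same loss value.

import numpy as np
import tensorflow as tf

predicted_probs = np.array([[0.05, 0.90, 0.05],
                            [0.80, 0.10, 0.10]], dtype="float32")

# integer labels -> SparseCategoricalCrossentropy
sparse_labels = np.array([1, 0])
scce = tf.keras.losses.SparseCategoricalCrossentropy()
print(scce(sparse_labels, predicted_probs).numpy())

# one-hot labels -> CategoricalCrossentropy gives the same value
onehot_labels = tf.one_hot(sparse_labels, depth=3)
cce = tf.keras.losses.CategoricalCrossentropy()
print(cce(onehot_labels, predicted_probs).numpy())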


Plotting a Confusion Matrix for CIFAR10 using Seaborn

Most of the google tutorials on keras do not show how to display a confusion matrix for

the solution. A confusion matrix can throw a clear light on how the model is performing .

Below is a simple cifar10 solution using keras. Most of the code is similar to any other

cifar10 tensorflow tutorial, except a small number of lines at the end, which plot

confusion matrix. Those lines are marked by comment.




import tensorflow as tf 

from tensorflow import keras 
import matplotlib.pyplot as plt 

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

input_shape = train_images.shape[1:]

model  = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32 , 3, activation="relu" , input_shape = input_shape))
model.add(tf.keras.layers.Conv2D(32 , 3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D())

model.add(tf.keras.layers.Conv2D(64 , 3, activation="relu"))
model.add(tf.keras.layers.Conv2D(64 , 3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D())



model.add(tf.keras.layers.Flatten())

model.add(tf.keras.layers.Dense(1024, activation = "relu"))
model.add(tf.keras.layers.Dense(10, activation = "softmax"))

model.compile(
    optimizer = tf.keras.optimizers.Adam() , 
    loss = tf.keras.losses.SparseCategoricalCrossentropy(), 
    metrics = ["accuracy"]
)


epochs = 20
history  =  model.fit(
    train_images, 
    train_labels, 
    validation_data = (test_images, test_labels),
    epochs = epochs
)


plt.figure(figsize = (8,8))

plt.subplot(1,2,1)
plt.plot(range(epochs) , history.history["accuracy"] , "r" , label = "Training Accuracy")
plt.plot(range(epochs) , history.history["val_accuracy"] , "b" , label = "Validation Accuracy")
plt.legend(loc="upper left")
plt.title("Accuracy")

plt.subplot(1,2,2)
plt.plot(range(epochs) , history.history["loss"] , "r" , label = "Training Loss")
plt.plot(range(epochs) , history.history["val_loss"] , "b" , label = "Validation Loss")
plt.legend(loc="upper right")
plt.title("Loss")

plt.show()


predictions  = model.predict(test_images)

#The following 7 lines are all that is required to plot the confusion matrix.
predictions_for_cm = predictions.argmax(1)

from sklearn.metrics import confusion_matrix
import seaborn as sns
class_names = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]

cm = confusion_matrix(test_labels,predictions_for_cm)
plt.figure(figsize=(8,8))
sns.heatmap(cm, annot=True,  xticklabels=class_names, yticklabels = class_names)


My answer on SO

MatPlotLib : Use of ravel to simplify subplotting

It is well known that plt.subplots() returns a Figure and an array of Axes objects. The dimension of the array is nrows by ncols. 

    fig, axes = plt.subplots(nrows, ncols, figsize=(x,y))

Now you can access each individual plot within this array using indexes like axes[i,j].

Following is an example of plotting some cifar10 images using the out of the box keras dataset.


import tensorflow as tf 
from tensorflow import keras 
import matplotlib.pyplot as plt 

(train_images,train_labels),(test_images,test_labels)= \
                        tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

import numpy as np 
import random
fig, axes = plt.subplots(nrows, ncols, figsize=(15,15))
print(axes.shape)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols) : 
  index = np.random.randint(0, n_training)
  axes[int(i/ncols) , int(i%ncols)].imshow(train_images[index])
  axes[int(i/ncols) , int(i%ncols)].set_title(train_labels[index])
  axes[int(i/ncols) , int(i%ncols)].axis("off")
plt.subplots_adjust(hspace = 0.2)
plt.subplots_adjust(wspace = 0.2)


Here, arange returns a contiguous array of numbers from 0 to nrows * ncols - 1 (0,...,19 here). We derive the row number and column number using the logic i/ncols, i%ncols.

We can simplify the above by flattening the axes array using either numpy's flatten or ravel functions. flatten returns a new copy whereas ravel returns a reference (a view where possible), so ravel proves to be more memory and speed efficient; it also gives the behaviour of a shallow copy, namely changes done through the derived object reflect in the original object.

Below is the code which accomplishes the same thing as above, but using ravel:
import tensorflow as tf 
from tensorflow import keras 
import matplotlib.pyplot as plt 

(train_images,train_labels),(test_images,test_labels)= \
                        tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

import numpy as np 
import random
fig, axes = plt.subplots(nrows, ncols, figsize=(15,15))
axes = axes.ravel()
print(axes.shape)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols) : 
  index = np.random.randint(0, n_training)
  axes[i].imshow(train_images[index])
  axes[i].set_title(train_labels[index])
  axes[i].axis("off")
plt.subplots_adjust(hspace = 0.2)
plt.subplots_adjust(wspace = 0.2)


You can observe that the axes can now be accessed in a linear fashion, without requiring any odd-looking logic for their indices. The code is simpler and more readable.


Below is the combined code if you want to try it out. Just uncomment the single- or double-commented lines to check either approach (a pass statement keeps the loop valid while everything is commented out).

import tensorflow as tf 
from tensorflow import keras 
import matplotlib.pyplot as plt 

(train_images,train_labels),(test_images,test_labels)= \
                        tf.keras.datasets.cifar10.load_data()

nrows = 4
ncols = 5

import numpy as np 
import random
fig, axes = plt.subplots(nrows, ncols, figsize=(15,15))
# # print(axes.shape)
# axes = axes.ravel()
# print(axes.shape)

n_training = len(train_images)
for i in np.arange(0, nrows * ncols) : 
  index = np.random.randint(0, n_training)
  # # axes[int(i/ncols) , int(i%ncols)].imshow(train_images[index])
  # # axes[int(i/ncols) , int(i%ncols)].set_title(train_labels[index])
  # # axes[int(i/ncols) , int(i%ncols)].axis("off")
  # axes[i].imshow(train_images[index])
  # axes[i].set_title(train_labels[index])
  # axes[i].axis("off")
  pass  # remove this line once you uncomment one of the approaches above
plt.subplots_adjust(hspace = 0.2)
plt.subplots_adjust(wspace = 0.2)

Friday, May 21, 2021

Why 7*7*256 as the number of units of input dense layer ?

This has reference to Google's TensorFlow MNIST DCGAN tutorial. 

The first dense layer at the input is configured to have 7*7*256 units, and the tutorial does not explain this choice. 

My impression about this is as follows: 

Remember we want a 28x28 greyscale image as the output of the generator. That means the required output shape is (None, 28, 28, 1), where the first entry is the batch size, which is None if a single image is required. 

Now note that a Conv2DTranspose layer with strides=(2,2) essentially upsamples the input shape by a factor of 2: it doubles it. Secondly, the number of filters of a Conv2DTranspose layer becomes the number of channels of its output; if I want the output to be greyscale, the number of filters should be one. Thus, if I want (None, 28, 28, 1) at the output of a Conv2DTranspose layer, the shape of its input should be (None, 14, 14, x). (The number of output channels is decided by the current layer's filters, so x at the input can be any value.)

Suppose I again put one more Conv2DTranspose layer with strides=(2,2) before that one; obviously the input to this layer should be (None, 7, 7, x), where x is the number of channels at its input. 

In general, if a batch of images of size  (h, w) is input to a Conv2DTranspose layer with strides = (2,2), its output will have shape (batch_size, 2*h, 2*w , no_of_filters) 

The Google tutorial further puts one more Conv2DTranspose layer [but with strides=(1,1), so it does not have the upsampling effect] between the Dense layer at the input and the upsampling layers above. Since this layer does no upsampling, the spatial shape stays 7x7; 7x7 is the image shape at that point. The first Dense layer's output is in flattened shape, so if it has 7*7*x units, we can always reshape it to get a (7, 7, x) image. 

This is the theory behind the 7*7*x units of the first dense layer. The value of 256 they have used for x is an arbitrary one, which I guess they derived empirically. 
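To see the shape progression concretely, here is a minimal sketch that just prints each layer's output shape (layer sizes follow the tutorial; the printed shapes are the point of interest):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(7*7*256, use_bias=False, input_shape=(100,)),   # (None, 12544)
    layers.Reshape((7, 7, 256)),                                 # (None, 7, 7, 256)
    layers.Conv2DTranspose(128, 5, strides=1, padding="same"),   # (None, 7, 7, 128)
    layers.Conv2DTranspose(64, 5, strides=2, padding="same"),    # (None, 14, 14, 64)
    layers.Conv2DTranspose(1, 5, strides=2, padding="same"),     # (None, 28, 28, 1)
])

for layer in model.layers:
  print(layer.output_shape)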


Another question on stack overflow

https://stackoverflow.com/questions/66844444/tensorflow-tutorial-dcgan-models-for-different-size-images


My answer to a question on SO

https://stackoverflow.com/questions/56081975/output-dimension-of-reshape-layer


Fun with Matplotlib : Creating a GIF of randomly generated images

import matplotlib.pyplot as plt
import numpy 
from numpy import random
import os 
import imageio
import glob

#rm -r  /content/images

images_dir = "/content/images"
if (os.path.exists(images_dir) == False): 
  os.mkdir(images_dir)

num_of_files = 10
cmaplist = plt.colormaps()

for i in range(num_of_files) : 
  random_cmap_no = random.randint(0, len(cmaplist))  # numpy's randint upper bound is exclusive
  img = random.random((50,50))
  plt.text(10, 10,"{:03d} ".format(i) + cmaplist[random_cmap_no], 
           bbox=dict( edgecolor='red', linewidth=2, fc = (random.random(),random.random(),0.8)))
  plt.imshow(img,cmap = plt.get_cmap(cmaplist[random_cmap_no]) , interpolation="nearest")
  filename = "{}/{:03d}.png".format(images_dir, i)
  plt.savefig(filename)
  plt.show()
  
gif_file = images_dir + "/" + "myimages.gif"

filenames = glob.glob( images_dir + "/*.png")
filenames = sorted(filenames)
images  = []
for filename in filenames:
  images.append(imageio.imread(filename))
imageio.mimsave(gif_file, images)

#another approach found in a google tutorial
# with imageio.get_writer(gif_file, mode="I") as writer:
#   filenames = glob.glob( images_dir + "/*.png")
#   filenames = sorted(filenames)
#   for filename in filenames:
#     image = imageio.imread(filename)
#     writer.append_data(image)
#   image = imageio.imread(filename)
#   writer.append_data(image)

Fun with Matplotlib : Creating attractive text arrangement

 

import matplotlib.pyplot as plt

plt.text(1.1, 0.9, "RMSProp", size=30, rotation = 25.,
         ha="right", va="top",
         bbox=dict(boxstyle="square",
                   ec=(1., 0.5, 0.5),
                   fc=(1., 0.8, 0.8),
                   )
         )


plt.text(0.3, 1.0, "momentum", size=30, rotation=-25.,
         ha="right", va="top",
         bbox=dict(boxstyle="square",
                   ec=(1., 0.5, 0.5),
                   fc=(1., 0.8, 0.8),
                   )
         )

plt.text(0.7, 0.6, "Adam", size=50,
         ha="right", va="top",
         bbox=dict(boxstyle="roundtooth",
                   ec=(1., 0.5, 0.5),
                   fc=(1., 0.8, 0.8),
                   )
         )
plt.axis("off")
plt.show()

Thursday, May 20, 2021


SyntaxError: positional argument follows keyword argument


Wrong:
model.add(tf.keras.layers.Dense(units = 7*7*256, use_bias = False, input_shape(100,)))



Right:
model.add(tf.keras.layers.Dense(units = 7*7*256, use_bias = False, input_shape=(100,)))

Wednesday, May 19, 2021

 

  • Momentum : uses an exponential moving average of current and previous gradients.
  • Adagrad : uses the squared current and previous gradients and uses their sqrt in the divisor of the lr.
  • RMSProp : uses an exponential moving average of the squares of current and previous gradients and uses its sqrt in the divisor of the lr.
  • Adam : 1. uses exponential moving averages of squared gradients (as RMSProp does) in the divisor of the lr; 2. uses an exponential moving average of current and previous gradients in the multiplier of the lr.
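For reference, one common formulation of these update rules (definitions vary between sources, as also noted further down this page about momentum, so treat this as one popular variant rather than the authoritative one); here $g_t$ is the gradient, $\eta$ the learning rate and $\epsilon$ a small stability constant:

$$
\begin{aligned}
\text{Momentum:}\quad & v_t = \beta v_{t-1} + (1-\beta)\,g_t, & \theta_t &= \theta_{t-1} - \eta\,v_t \\
\text{Adagrad:}\quad & r_t = r_{t-1} + g_t^2, & \theta_t &= \theta_{t-1} - \frac{\eta}{\sqrt{r_t}+\epsilon}\,g_t \\
\text{RMSProp:}\quad & r_t = \rho r_{t-1} + (1-\rho)\,g_t^2, & \theta_t &= \theta_{t-1} - \frac{\eta}{\sqrt{r_t}+\epsilon}\,g_t \\
\text{Adam:}\quad & m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \quad r_t = \beta_2 r_{t-1} + (1-\beta_2)\,g_t^2, \\
& \hat m_t = \frac{m_t}{1-\beta_1^t}, \quad \hat r_t = \frac{r_t}{1-\beta_2^t}, & \theta_t &= \theta_{t-1} - \frac{\eta\,\hat m_t}{\sqrt{\hat r_t}+\epsilon}
\end{aligned}
$$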

 MIT Press Book


https://www.deeplearningbook.org/

Available fully online

By 

Ian Goodfellow, Yoshua Bengio and Aaron Courville



 Andrew Ng Links


https://www.youtube.com/watch?v=E-aX2yK3Uws

1.2.1 Model Representation by Andrew Ng


https://www.youtube.com/watch?v=lM49Mz3mIXE

1.2.2 Cost Function by Andrew Ng


https://www.youtube.com/watch?v=CFeCaFUnhlM

1.2.3 Cost Function Intuition I by Andrew Ng


https://www.youtube.com/watch?v=iRQpg_CZNW4

1.2.4 Cost Function Intuition II by Andrew Ng


https://www.youtube.com/watch?v=yFPLyDwVifc

1.2.5 Gradient Descent



https://www.youtube.com/watch?v=rIVLE3condE

1.2.6 Gradient Descent Intuition


https://www.youtube.com/watch?v=q0pm-ZweMfk

1.2.7 Gradient Descent For Linear Regression






https://www.youtube.com/watch?v=yR2ipCoFvNo

Lecture 2.3 — Linear Regression With One Variable | Cost Function Intuition #1 | Andrew Ng

https://www.youtube.com/watch?v=0kns1gXLYg4

Lecture 2.4 — Linear Regression With One Variable | Cost Function Intuition #2 | Andrew Ng

https://www.youtube.com/watch?v=F6GSRDoB-Cg

Lecture 2.5 — Linear Regression With One Variable | Gradient Descent — [ Andrew Ng]




https://www.youtube.com/playlist?list=PLpFsSf5Dm-pd5d3rjNtIXUHT-v7bdaEIe


Another explanation here is also good and lists down the formulae as per popular internet posts today, but it needs to be verified for authenticity, because there seem to be some differences; for example, the definition of SGD with momentum in this link uses exponential averages of gradients, whereas some people do not use them, such as this one.

Monday, May 17, 2021

Keras Callbacks


Keras Callback is an object that can perform actions at various stages of training. 

It can be used for various things like logging, model saving, early stopping, etc. 


The points at which you can insert a callback are: 

1. Global points : at the beginning and end of training/testing/prediction

  • on_(train|test|predict)_begin(self, logs=None)
  • on_(train|test|predict)_end(self, logs=None)

2. Batch level points: 

  • on_(train|test|predict)_batch_begin(self, batch, logs=None)
  • on_(train|test|predict)_batch_end(self, batch, logs=None)

3. Epoch level points:

  • on_epoch_begin(self, epoch, logs=None)
  • on_epoch_end(self, epoch, logs=None)


Apart from this, keras also provides built-in callbacks:

  • ModelCheckpoint
  • TensorBoard
  • EarlyStopping
  • LearningRateScheduler
  • ReduceLROnPlateau
  • RemoteMonitor
  • LambdaCallback
  • TerminateOnNaN
  • CSVLogger
  • ProgbarLogger
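As a minimal sketch of a custom callback (the class name and messages here are made up for illustration), you subclass keras.callbacks.Callback and override only the hooks you need:

import tensorflow as tf

class EpochLogger(tf.keras.callbacks.Callback):
  # hypothetical example: log the loss at the end of every epoch

  def on_train_begin(self, logs=None):
    print("Training is starting")

  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    print("Epoch {} ended with loss {:.4f}".format(epoch, logs.get("loss", float("nan"))))

It is then passed to fit via the callbacks argument, e.g. model.fit(X, y, epochs=5, callbacks=[EpochLogger()]).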


 Three types of Differentiation Algorithms


Symbolic Differentiation 

Numeric Differentiation

Automatic Differentiation

Sunday, May 16, 2021

Gradient Tape Basic Tutorial

Gradient Tape is tensorflow's automatic differentiation API.

GradientTape allows you to calculate and track gradient of every differentiable tensorflow operation.  

GradientTape allows you to create custom training loops.

As an example, consider a linear equation y = 4x -5.
Here 4 is the weight and -5 is the bias. 
Let us create a custom training loop for this equation and see if it is able to learn the weight and bias.

import numpy as np
import tensorflow as tf
import random

x = np.array([-2, -1, 0, 1, 2, 4, 5, 6], dtype=float)
y = 4 * x - 5

print(x)
print(y)

# define weight and bias as trainable variables
w = tf.Variable(random.random(), trainable=True)
b = tf.Variable(random.random(), trainable=True)

# simple loss function
def simple_loss(y_groundtruth, y_predicted):
  return tf.abs(y_groundtruth - y_predicted)

# learning rate
lr = 0.001

def fit_function(x_groundtruth, y_groundtruth):
  # persistent=True because tape.gradient() is called twice below
  with tf.GradientTape(persistent=True) as tape:
    y_predicted = w * x_groundtruth + b
    loss = simple_loss(y_groundtruth, y_predicted)

  w_gradient = tape.gradient(loss, w)
  b_gradient = tape.gradient(loss, b)
  del tape  # release the resources held by the persistent tape

  w.assign_sub(w_gradient * lr)
  b.assign_sub(b_gradient * lr)

for _ in range(2000):
  fit_function(x, y)

# w and b are tf.Variable objects; printing them directly shows the
# <tf.Variable ...> representation, hence the call to the numpy() method
print("Expected weight: 4; Predicted weight: {}".format(w.numpy()))
print("Expected bias : -5; Predicted bias : {}".format(b.numpy()))


Output : 

[-2. -1.  0.  1.  2.  4.  5.  6.]

[-13.  -9.  -5.  -1.   3.  11.  15.  19.]

Expected weight: 4; Predicted weight: 3.9907336235046387

Expected bias : -5; Predicted bias : -5.000271320343018

The predictions are pretty close to the ground truth after 2000 epochs.

Saturday, May 15, 2021

Today I completed the Coursera project
Machine Learning Pipelines with Azure ML Studio



https://www.coursera.org/learn/azure-machine-learning-studio-pipeline/ungradedLti/vMzyM/machine-learning-pipelines-with-azure-ml-studio


It is free. If it is not available for free when you are accessing it, you can find a similar project here: 

https://carldesouza.com/creating-an-income-prediction-azure-ml-experiment-in-azure-ml-studio/


The difference between the Coursera project and the above link is that the Coursera project additionally applies the Synthetic Minority Oversampling Technique (SMOTE) to the data and then compares the result with the non-SMOTE version. Still, both projects have a lot in common, so if the Coursera one is not available or free when you are accessing it, the second one is also OK.


Friday, May 14, 2021

 https://analyticsindiamag.com/transfer-learning-using-tensorflow-keras/

https://analyticsindiamag.com/computer-vision-using-tensorflow-keras/

 Datacamp numpy cheat sheet looks like a good summary of numpy array info


https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf

Numpy slicing pattern


The general numpy slicing pattern is : 

        array[rows_start : rows_end + 1 : rows_step , col_start : col_end + 1: col_step]

default start is 0

default end is length of array

default step is 1 

each of the above is optional, except the first colon [:] 



mydata = np.array( [ [1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
print(mydata)
print(mydata[:])  #all parameters left to default, will print full array , same as print(mydata)
print("\n\nOdd rows only \n{}\n".format(mydata[0::2]))
print("\n\nOdd columns only \n{}\n".format(mydata[:,0::2]))
print("\n\nEven rows and even columns \n{}\n".format(mydata[1::2,1::2]))
print("\n\nFirst two columns only \n{}\n".format(mydata[:,:2])) # [:,:2] => [:,0:2:1]
print("\n\nOnly second column of all rows\n{}\n".format(mydata[:,2])) # a specific column (index 2, zero-based)
print("\n\nAll columns from second column onwards\n{}\n".format(mydata[:,2:])) # [:,2:] => [:,2:len:1]
# note the difference between [:,2] and [:,2:] in the above two cases

print("-------------------------Array Attributes----------------------")
#examine the array attributes
# Print out memory address
print("Memory Address: {} ".format(mydata.data))

# Print out the shape
print("Shape: {} ".format(mydata.shape))
# Print out the data type
print("Data Type: {} ".format(mydata.dtype))
# Print out the stride
print("Strides: {}".format(mydata.strides))
# Print the number of dimensions
print("Number of Dimensions: {}".format(mydata.ndim))

# Print the number of elements
print("Number of elements: {}".format(mydata.size))
# Print information about memory layout
print("Flags: {}".format(mydata.flags))
# Print the length of one array element in bytes
print("Size of single array element:{}".format(mydata.itemsize))
# Print the total consumed bytes by all elements
print("Total consumed bytes: {}".format(mydata.nbytes))






Output:

[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]
 [13 14 15 16]]
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]
 [13 14 15 16]]

Odd rows only 
[[ 1  2  3  4]
 [ 9 10 11 12]]

Odd columns only 
[[ 1  3]
 [ 5  7]
 [ 9 11]
 [13 15]]

Even rows and even columns 
[[ 6  8]
 [14 16]]

First two columns only 
[[ 1  2]
 [ 5  6]
 [ 9 10]
 [13 14]]

Only second column of all rows
[ 3  7 11 15]

All columns from second column onwards
[[ 3  4]
 [ 7  8]
 [11 12]
 [15 16]]

-------------------------Array Attributes----------------------
Memory Address: <memory at 0x7fd113ace910> 
Shape: (4, 4) 
Data Type: int64 
Strides: (32, 8)
Number of Dimensions: 2
Number of elements: 16
Flags:   C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
  UPDATEIFCOPY : False
Size of single array element:8
Total consumed bytes: 128

PIMA INDIAN DIABETES DATASET 


import  tensorflow as tf 
import numpy as np
from tensorflow import keras
import os 


pimadsurl = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
filepath = "/content/pimads.csv"
if (os.path.exists(filepath) == False) : 
  tf.keras.utils.get_file(filepath , origin = pimadsurl )



# # first neural network with keras tutorial
# from numpy import loadtxt
# from keras.models import Sequential
# from keras.layers import Dense
# load the dataset
dataset = np.loadtxt(filepath, delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = keras.models.Sequential()
model.add(keras.layers.Dense(12, input_dim=8, activation='relu'))
model.add(keras.layers.Dense(8, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10,
          verbose=0 # verbose=0 means do not show the progress bar
          )
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

# make class predictions with the model
predictions = model.predict_classes(X)
# summarize the first 5 cases
for i in range(5):
  print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))


# /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/sequential.py:450: UserWarning: `model.predict_classes()` is deprecated and will be removed after 2021-01-01. Please use instead:* `np.argmax(model.predict(x), axis=-1)`,   if your model does multi-class classification   (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype("int32")`,   if your model does binary classification   (e.g. if it uses a `sigmoid` last-layer activation).
#   warnings.warn('`model.predict_classes()` is deprecated and '

AttributeError: 'Model' object has no attribute 'predict_classes'


Models in Keras, both Sequential and Functional, provide the following prediction functions: 
  • predict()
  • predict_on_batch()
  • predict_step()
Apart from these, the Sequential model provides two additional methods: 
  • predict_classes()
  • predict_proba()
These functions directly predict the resultant class or the class probability, without needing any conversion like numpy.argmax(). However, predict_classes() and predict_proba() are not supported by the Functional model, and you will get the following error if you try to call them on one: 

AttributeError: 'Model' object has no attribute 'predict_classes'
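With a Functional model the equivalent is to post-process predict() yourself, which is also what the deprecation warning shown in the previous post suggests (sketch below; model and X stand for your own model and inputs):

import numpy as np

probs = model.predict(X)

# multi-class classification (softmax last layer)
classes = np.argmax(probs, axis=-1)

# binary classification (sigmoid last layer)
classes = (probs > 0.5).astype("int32")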

Thursday, May 13, 2021

 

Difference between keras Sequential and Functional API


1. Syntax: a Sequential model is declared using the keras.Sequential([]) constructor, while a Functional model is declared using the keras.Model class.
2. Syntax: in Sequential, layers are added using the .add() interface or simply an array-like syntax. In Functional there is no addition of layers; you simply declare the model's input and output layers, and if more layers are required in between, they are chained to the input layer using functional (call) syntax.
3. Sequential supports only linear graphs of layers (that means branching is not supported); Functional supports non-linear graphs of layers (layers can branch out and merge).
4. Sequential supports only a single input layer; Functional supports multiple input layers.
5. Sequential supports only a single output layer; Functional supports multiple output layers.
6. Sequential supports no sharing of layers (it is not needed either, since a layer can communicate with only one input and one output layer); in Functional, layers can be shared across other layers down the chain.
7. Sequential is simplistic and non-flexible; Functional is flexible and supports more complex scenarios.
8. Sequential is easy to set up and enough for most scenarios; Functional is comparatively complex to set up, but it is beneficial, and also the only option, for complex scenarios.
9. Sequential provides two extra prediction functions: predict_classes (a numpy array of class predictions) and predict_proba (a numpy array of class probability predictions), apart from the common prediction functions predict, predict_on_batch and predict_step. Functional does NOT provide the shortcuts predict_classes and predict_proba; it only provides the common predict, predict_on_batch and predict_step.
Note 1: Both Sequential and Functional models earlier supported a method called predict_generator, to be used with generators. However, this method is now deprecated and its functionality has been merged into the predict function.
Note 2: predict_classes and predict_proba (which are supported by the Sequential model only) only make sense for classification problems. 


RuntimeError: Intra op parallelism cannot be modified after initialization. 


Issue : (in Google Colab) the following error is raised when trying to set the number of threads.

RuntimeError: Intra op parallelism cannot be modified after initialization.


Following code is responsible for this error:


tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)

It allows you to change inter-op parallelism after initialization, but not intra-op parallelism.


Solution: restart the runtime and the error will go away.

The error will not be raised if the above code runs before anything else initializes TensorFlow in the fresh runtime.


Limitations of Sequential Model 

It is not straightforward to define models that may have multiple different input sources, produce multiple output destinations or models that re-use layers.

Functional model API allows you to define multiple input or output models as well as models that share layers. More than that, it allows you to define ad hoc acyclic network graphs.
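As a small sketch of what Sequential cannot express (a hypothetical two-input model; the layer sizes are made up), the Functional API lets separate branches merge into one output:

import tensorflow as tf
from tensorflow.keras import layers, Model

# two separate input sources
main_input = layers.Input(shape=(64,))
meta_input = layers.Input(shape=(8,))

# each input gets its own branch...
x1 = layers.Dense(32, activation="relu")(main_input)
x2 = layers.Dense(8, activation="relu")(meta_input)

# ...and the branches are merged, which a Sequential model cannot do
merged = layers.concatenate([x1, x2])
output = layers.Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[main_input, meta_input], outputs=output)
model.summary()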




KERAS FUNCTIONAL API 

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

visible = Input(shape=(64,64,1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
flat = Flatten()(pool2)
hidden1 = Dense(10, activation='relu')(flat)
output = Dense(1, activation='sigmoid')(hidden1)

model = Model(inputs=visible, outputs=output)

 Laurence Moroney's first tutorial 



import  tensorflow as tf 
import numpy as np
from tensorflow import keras

xs = np.array([-1, 0, 1, 2, 3, 4, 5, 6], dtype = float)
ys = np.array([-2, 1, 4, 7, 10, 13, 16, 19], dtype = float)



model  = keras.models.Sequential([
          keras.layers.Dense(units = 1 , input_shape=[1])
])

model.compile(optimizer = "sgd" , loss = "mean_squared_error")

model.fit(xs, ys,epochs = 50)

print(model.predict(np.array([[10.0]])))   # expect a value close to 31, since ys follow y = 3x + 1

Trekhleb


https://github.com/trekhleb/learn-python

https://github.com/trekhleb/machine-learning-experiments

https://trekhleb.dev/machine-learning-experiments/#/

Practical deployment on web using tensorflow.js


Rock Papers and scissors: 

https://colab.research.google.com/github/trekhleb/machine-learning-experiments/blob/master/experiments/rock_paper_scissors_cnn/rock_paper_scissors_cnn.ipynb#scrollTo=_3APy_0-1LvQ




https://colab.research.google.com/github/trekhleb/machine-learning-experiments/blob/master/experiments/clothes_generation_dcgan/clothes_generation_dcgan.ipynb

https://colab.research.google.com/github/trekhleb/machine-learning-experiments/blob/master/experiments/digits_recognition_cnn/digits_recognition_cnn.ipynb

https://colab.research.google.com/github/trekhleb/machine-learning-experiments

13 experiments
