Tuesday, June 22, 2021

Gradient Tape Basic Tutorial (contd.)

Further to the previous post, I tried two built-in Keras loss functions, MAE and MSE. While the results of the simple loss and MSE are comparable, MAE produced wrong results.


import numpy as np 
import tensorflow as tf
import random 

x = np.array([-2, -1, 0, 1, 2, 4, 5, 6], dtype=float)
y = 4 * x - 5

print(x)
print(y) 

#define weight and bias
w = tf.Variable(random.random(), trainable=True)
b = tf.Variable(random.random(), trainable=True)

#loss functions: built-in Keras MAE and MSE, plus a simple element-wise loss

def mae_loss(y_groundtruth, y_predicted):
  loss = tf.keras.losses.MeanAbsoluteError(reduction=tf.keras.losses.Reduction.SUM)
  return loss(y_groundtruth, y_predicted)

def mse_loss(y_groundtruth, y_predicted):
  loss = tf.keras.losses.MeanSquaredError()
  return loss(y_groundtruth, y_predicted)

def simple_loss(y_groundtruth, y_predicted):
  return tf.abs(y_groundtruth - y_predicted)

#learning rate
lr = 0.001


def fit_function(x_groundtruth, y_groundtruth):
  with tf.GradientTape(persistent=True) as tape:
    y_predicted = w * x_groundtruth + b

    #pick one loss at a time; the runs below were done with each in turn
    #loss = simple_loss(y_groundtruth, y_predicted)
    loss = mse_loss(y_groundtruth, y_predicted)
    #loss = mae_loss(y_groundtruth, y_predicted)

  w_gradient = tape.gradient(loss, w)
  b_gradient = tape.gradient(loss, b)

  #gradient descent update
  w.assign_sub(w_gradient * lr)
  b.assign_sub(b_gradient * lr)

for _ in range(4000):
  fit_function(x, y)

#w and b are tf.Variable objects; printing them directly prints the
#<object> representation, hence call the numpy() method
print("Expected weight: 4; Predicted weight: {}".format(w.numpy()))
print("Expected bias : -5; Predicted bias : {}".format(b.numpy()))



Below are the results:
1. With Simple Loss:
Expected weight: 4; Predicted weight: 3.9901857376098633
Expected bias : -5; Predicted bias : -5.0017409324646

2. With MSE:
Expected weight: 4; Predicted weight: 3.994652032852173
Expected bias : -5; Predicted bias : -4.970853328704834

3. With MAE:
Expected weight: 4; Predicted weight: 3.326237440109253
Expected bias : -5; Predicted bias : -1.6331548690795898


Notes:
1. Results are with 4000 epochs as opposed to 2000 epochs earlier. No significant improvement was noticed (in the case of simple loss and MSE) because the results were already fairly accurate with 2000 epochs.
2. For MAE, some trial and error was done by changing the default parameters, but the results did not improve. A possible explanation is sketched below.
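
A possible explanation (my own assumption, not verified in the runs above): for 1-D targets, tf.keras.losses.MeanAbsoluteError first averages the absolute errors over the samples, so its gradient comes out roughly N times smaller (N = number of samples) than that of the element-wise simple loss, whose per-sample gradients get summed by tape.gradient(). With lr = 0.001 the MAE run would then simply converge much more slowly rather than being broken. A minimal sketch to compare the gradient scales:

#sketch only -- assumes the averaging behaviour described above
import numpy as np
import tensorflow as tf

x = np.array([-2, -1, 0, 1, 2, 4, 5, 6], dtype=np.float32)
y = 4 * x - 5

w = tf.Variable(1.0, trainable=True)
b = tf.Variable(0.0, trainable=True)

mae = tf.keras.losses.MeanAbsoluteError(reduction=tf.keras.losses.Reduction.SUM)

with tf.GradientTape(persistent=True) as tape:
  y_predicted = w * x + b
  loss_mae = mae(y, y_predicted)          #scalar: mean of |error| over the samples
  loss_simple = tf.abs(y - y_predicted)   #vector: one |error| per sample

#tape.gradient sums the per-element gradients of loss_simple, so this value
#should come out roughly N times larger than the MAE gradient
print(tape.gradient(loss_mae, w).numpy())
print(tape.gradient(loss_simple, w).numpy())

If that is indeed the cause, scaling the learning rate up for the MAE run (or running many more epochs) should close the gap, instead of changing the loss function's own parameters.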
