Tuesday, June 29, 2021

Basic Git Commands

git status -s   [short format; also --short]

git status --long

git status     [same as git status --long]

git status -u   [show untracked files; also --untracked-files]


git add xx.txt  [adds a file to git; it starts getting tracked, but is not yet committed]

git commit -m "add file xx.txt"  [-m provides a custom message to tag the commit]


git add .  [adds all files to the staging area - both modified and new]



git checkout xx.txt  [overwrites xx.txt in the working directory with the version in the staging area (index)]


git reset HEAD xx.txt  [overwrites xx.txt in the staging area with the last committed version]


List all branches:

git branch -a


git pull = git fetch + git merge FETCH_HEAD


Checkout a particular version of a file:

git log -- filename.ext

git checkout XXXX filename.ext   [XXXX -> a unique prefix of the commit hash; git accepts any unambiguous prefix of at least four characters]
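
For example (file name and hashes below are made up):

git log --oneline -- notes.txt
# f3a9c21 fix typos in notes
# 8d2b4e7 first draft of notes
git checkout 8d2b4e7 notes.txt   [restores that version of notes.txt into the working directory and staging area]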



useful git cheatsheet: 

http://ndpsoftware.com/git-cheatsheet.html#loc=remote_repo;


https://marklodato.github.io/visual-git-guide/index-en.html

http://git-scm.com/book/en/v2/Git-Basics-Undoing-Things


Sunday, June 27, 2021

Tensorflow Decision Forests

 #Introducing TensorFlow Decision Forests

#https://blog.tensorflow.org/2021/05/introducing-tensorflow-decision-forests.html


#next step : https://www.tensorflow.org/decision_forests/tutorials
################################################################################


#!pip install tensorflow_decision_forests 
#!wget "https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins.csv"

# Load TensorFlow Decision Forests
import tensorflow_decision_forests as tfdf

# Load the training dataset using pandas
import pandas
df  = pandas.read_csv("penguins.csv")

from sklearn.model_selection import train_test_split 
train_df , test_df = train_test_split(df, test_size = 0.2)

# Convert the pandas dataframe into a TensorFlow dataset
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="species")

# Train the model
model = tfdf.keras.RandomForestModel()
model.fit(train_ds)


# Load the testing dataset
#test_df = pandas.read_csv("penguins_test.csv")

# Convert it to a TensorFlow dataset
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="species")

# Evaluate the model
model.compile(metrics=["accuracy"])
print(model.evaluate(test_ds))
# >> 0.979311
# Note: Cross-validation would be more suited on this small dataset.
# See also the "Out-of-bag evaluation" below.

# Export the model to a TensorFlow SavedModel
model.save("project/my_first_model")
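
To load the SavedModel back later, the standard Keras loader should work, as long as tensorflow_decision_forests has been imported first (a minimal sketch using the path from above):

# Reload the SavedModel; importing tfdf first registers the TF-DF custom ops
import tensorflow as tf
import tensorflow_decision_forests as tfdf

reloaded_model = tf.keras.models.load_model("project/my_first_model")
reloaded_model.compile(metrics=["accuracy"])
print(reloaded_model.evaluate(test_ds))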


tfdf.model_plotter.plot_model_in_colab(model, tree_idx=0)


model.summary()



model.make_inspector().variable_importances()
#["MEAN_DECREASE_IN_ACCURACY"]
#MEAN_DECREASE_IN_ACCURACY

# List all the other available learning algorithms
tfdf.keras.get_all_models()

# Display the hyper-parameters of the Gradient Boosted Trees model 
#the following will open a new help window describing GradientBoostedTreesModel
#? tfdf.keras.GradientBoostedTreesModel

# Create another model with specified hyper-parameters
model = tfdf.keras.GradientBoostedTreesModel(
    num_trees=500,
    growing_strategy="BEST_FIRST_GLOBAL",
    max_depth=8,
    split_axis="SPARSE_OBLIQUE",
    )

# Train and evaluate the model (the new model must be fit before evaluating)
model.fit(train_ds)
model.compile(metrics=["accuracy"])
print(model.evaluate(test_ds))

Zablo

 https://zablo.net/

Friday, June 25, 2021

 Helsinki

https://www.helsinki.fi/en/admissions-and-education/open-university/open-online-courses-or-moocs

https://www.mooc.fi/fi#courses

https://devopswithkubernetes.com/

https://fullstackopen.com/en/

https://devopswithdocker.com/

https://www.elementsofai.com/

https://course.elementsofai.com/

https://buildingai.elementsofai.com/

https://cybersecuritybase.mooc.fi/

https://haskell.mooc.fi/  (functional programming) 


AttributeError: module 'tensorflow' has no attribute 'app'

I was following this article, which uses tf.app in its second point. When I ran the code, I got:

AttributeError: module 'tensorflow' has no attribute 'app'


The reason for this error is that TensorFlow 2.x no longer provides the tf.app module.

But hold on, there are alternatives! The alternatives are gflags (for FLAGS) and google.apputils.app (for tf.app).

To use gflags, you first need to pip install gflags:

pip install python-gflags


Now you can use them in a similar way to tf.app.flags. There are slight changes in the parameter names:

flag_name becomes name, default_value becomes default & docstring becomes help.


Old code with tf.app.flags:

import sys
import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_string(flag_name='color',
                    default_value='green',
                    docstring='the color to make a flower')

def main():
    flags.FLAGS._parse_flags(args=sys.argv[1:])
    print('a {} flower'.format(flags.FLAGS.color))

if __name__ == '__main__':
    main()





New code with gflags (note that the __main__ check belongs at module level, not inside main):

import sys
import gflags

gflags.DEFINE_string(name='color',
                     default='green',
                     help='the color to make a flower')

def main():
    gflags.FLAGS(sys.argv)
    print('a {} flower'.format(gflags.FLAGS.color))

if __name__ == '__main__':
    main()


If you want to use app also , like tf.app , you can use
google.apputils.app


Or, alternatively, you can use tf.compat.v1; then the good old tf.app works:
import tensorflow.compat.v1 as tf
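
A minimal sketch of that route, mirroring the old example above (with compat.v1 the flags are absl-backed, so parsing happens inside tf.app.run rather than via _parse_flags):

import tensorflow.compat.v1 as tf

flags = tf.app.flags
flags.DEFINE_string('color', 'green', 'the color to make a flower')
FLAGS = flags.FLAGS

def main(argv):
    # tf.app.run parses sys.argv into FLAGS before calling main
    print('a {} flower'.format(FLAGS.color))

if __name__ == '__main__':
    tf.app.run(main)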

Tensorflow Official Object Detection Tutorials

https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/object_detection_tutorial.ipynb

https://www.tensorflow.org/hub/tutorials/object_detection

https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb




Currently getting two errors in the first Google Colab notebook, at the end (in Mask R-CNN only; the rest works):

#masking_model.output_shapes
#the above gives error:
#AttributeError: 'AutoTrackable' object has no attribute 'output_shapes'


for image_path in TEST_IMAGE_PATHS:
  show_inference(masking_model, image_path)
#show_inference gives error:
#TypeError: Cannot interpret 'tf.uint8' as a data type

Vit Busquet's 4 notebooks for OpenCV beginners

 


OpenCV fundamentals

https://colab.research.google.com/github/computationalcore/introduction-to-opencv/blob/master/notebooks/1-Fundamentals.ipynb



Image stats and image processing

https://colab.research.google.com/github/computationalcore/introduction-to-opencv/blob/master/notebooks/2-Image_stats_and_image_processing.ipynb



Features in computer vision

https://colab.research.google.com/github/computationalcore/introduction-to-opencv/blob/master/notebooks/3-Features.ipynb


Cascade Classification

https://colab.research.google.com/github/computationalcore/introduction-to-opencv/blob/master/notebooks/4-Cascade_classification.ipynb



Thursday, June 24, 2021

My Answers on SO

https://stackoverflow.com/questions/6072087/table-module-and-table-data-gateway-patterns

https://stackoverflow.com/questions/61969311/open-cv-dnn-error-for-python-while-using-yolov3-using-open-cv-ver4-2-0/68125433#68125433

https://stackoverflow.com/questions/55924673/how-to-print-confusion-matrix-for-image-classifier-cifar-10/67691696#67691696

https://stackoverflow.com/questions/56081975/output-dimension-of-reshape-layer/67641074#67641074


Resources for Object Detection Implementation from Scratch in Keras

Most of the time we use pre-trained models for object detection in TensorFlow/Keras, and the web is full of tutorials of that kind.

At the same time, there are certain implementations written from scratch.

Below I plan to list such resources.


1. RetinaNet: the official Keras tutorial implements RetinaNet with a ResNet50 backbone.

https://keras.io/examples/vision/retinanet/

2. Mirza Mujtaba's YOLO implementation: 

https://www.kaggle.com/mirzamujtaba/yolo-object-detection-using-keras








Eric Dortmans - It is indispensable!

This is undoubtedly one of the most indispensable resources for any ML beginner.


https://colab.research.google.com/github/dortmans/ml_notebooks/blob/master/ML_Workshop.ipynb#scrollTo=FEuR8n3P9vaQ


Many thanks to Eric Dortmans for this!


Eric does not seem to have a personal page on the web. He appears, however, to be a senior person who has worked in the technology field for a long time.

Would love to have more material from Eric, if possible.

Wednesday, June 23, 2021

Tensorflow Official Object Detection Tutorial


#Tensorflow official tutorial - beach image

#https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb




# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models

%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .






import os
import pathlib

import matplotlib
import matplotlib.pyplot as plt

import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen

import tensorflow as tf
import tensorflow_hub as hub

tf.get_logger().setLevel('ERROR')



# @title Run this!!

def load_image_into_numpy_array(path):
  """Load an image from file into a numpy array.

  Puts image into numpy array to feed into tensorflow graph.
  Note that by convention we put it into a numpy array with shape
  (height, width, channels), where channels=3 for RGB.

  Args:
    path: the file path to the image

  Returns:
    uint8 numpy array with shape (img_height, img_width, 3)
  """
  image = None
  if(path.startswith('http')):
    response = urlopen(path)
    image_data = response.read()
    image_data = BytesIO(image_data)
    image = Image.open(image_data)
  else:
    image_data = tf.io.gfile.GFile(path, 'rb').read()
    image = Image.open(BytesIO(image_data))

  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (1, im_height, im_width, 3)).astype(np.uint8)



ALL_MODELS = {
'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',
'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',
'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',
'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',
'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',
'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',
'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',
'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',
'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',
'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1',
'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',
'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',
'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',
'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',
'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',
'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',
'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',
'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',
'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',
'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',
'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',
'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',
'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',
'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',
'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',
'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',
'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',
'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',
'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',
'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',
'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',
'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',
'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',
'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',
'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',
'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',
'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',
'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
}


IMAGES_FOR_TEST = {
  'Beach' : 'models/research/object_detection/test_images/image2.jpg',
  'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
  # By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
  'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',
  # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
  'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',
  # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
  'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
  # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
  'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',
}


COCO17_HUMAN_POSE_KEYPOINTS = [
    (0, 1), (0, 2), (1, 3), (2, 4), (0, 5), (0, 6),
    (5, 7), (7, 9), (6, 8), (8, 10), (5, 6), (5, 11),
    (6, 12), (11, 12), (11, 13), (13, 15), (12, 14), (14, 16)]



from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops

%matplotlib inline

PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)



#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']
model_handle = ALL_MODELS[model_display_name]

print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))

print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')



#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}

image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)

# Flip horizontally
if(flip_image_horizontally):
  image_np[0] = np.fliplr(image_np[0]).copy()

# Convert image to grayscale
if(convert_image_to_grayscale):
  image_np[0] = np.tile(
    np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()




# running inference
results = hub_model(image_np)

# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())

label_id_offset = 0
image_np_with_detections = image_np.copy()

# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
  keypoints = result['detection_keypoints'][0]
  keypoint_scores = result['detection_keypoint_scores'][0]

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections[0],
      result['detection_boxes'][0],
      (result['detection_classes'][0] + label_id_offset).astype(int),
      result['detection_scores'][0],
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.30,
      agnostic_mode=False,
      keypoints=keypoints,
      keypoint_scores=keypoint_scores,
      keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)

plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()



# Handle models with masks:
image_np_with_mask = image_np.copy()

if 'detection_masks' in result:
  # we need to convert np.arrays to tensors
  detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
  detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])

  # Reframe the bbox mask to the image size.
  detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes,
            image_np.shape[1], image_np.shape[2])
  detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8)
  result['detection_masks_reframed'] = detection_masks_reframed.numpy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_mask[0],
      result['detection_boxes'][0],
      (result['detection_classes'][0] + label_id_offset).astype(int),
      result['detection_scores'][0],
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.30,
      agnostic_mode=False,
      instance_masks=result.get('detection_masks_reframed', None),
      line_thickness=8)

plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()



Object Detection in Tensorflow Resources

 https://gilberttanner.com/blog/object-detection-with-tensorflow-2

Tensorflow Object Detection with Tensorflow 2

by Gilbert Tanner, Jul 13, 2020

https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1

TF HUB

https://blog.tensorflow.org/2020/07/tensorflow-2-meets-object-detection-api.html









IOU Calculations

1. Check https://www.programcreek.com/python/?CodeExample=get+iou

Example 51 explains the concept properly; note that it adds a small number (1e-8) to the denominator to avoid division by zero:

iou = intersection / (area1 + area2 - intersection + 1e-8)

Example 54 provides a good implementation using tf.

2. lu and rd refer to the left-up and right-down corner points:
    lu = tf.maximum(boxes1_corners[:, None, :2], boxes2_corners[:, :2])
    rd = tf.minimum(boxes1_corners[:, None, 2:], boxes2_corners[:, 2:])
    intersection = tf.maximum(0.0, rd - lu)


3. The following is an implementation from https://github.com/srihari-humbarwadi/YOLOv1-TensorFlow2.0/blob/master/yolo_v1.py (boxes in [center_x, center_y, w, h] format):

def compute_iou(boxes1, boxes2):
    # convert [cx, cy, w, h] to corner format [x1, y1, x2, y2]
    boxes1_t = tf.stack([boxes1[..., 0] - boxes1[..., 2] / 2.0,
                         boxes1[..., 1] - boxes1[..., 3] / 2.0,
                         boxes1[..., 0] + boxes1[..., 2] / 2.0,
                         boxes1[..., 1] + boxes1[..., 3] / 2.0], axis=-1)
    boxes2_t = tf.stack([boxes2[..., 0] - boxes2[..., 2] / 2.0,
                         boxes2[..., 1] - boxes2[..., 3] / 2.0,
                         boxes2[..., 0] + boxes2[..., 2] / 2.0,
                         boxes2[..., 1] + boxes2[..., 3] / 2.0], axis=-1)
    lu = tf.maximum(boxes1_t[..., :2], boxes2_t[..., :2])
    rd = tf.minimum(boxes1_t[..., 2:], boxes2_t[..., 2:])
    intersection = tf.maximum(0.0, rd - lu)
    inter_square = intersection[..., 0] * intersection[..., 1]
    square1 = boxes1[..., 2] * boxes1[..., 3]
    square2 = boxes2[..., 2] * boxes2[..., 3]
    union_square = tf.maximum(square1 + square2 - inter_square, 1e-10)
    return tf.clip_by_value(inter_square / union_square, 0.0, 1.0)
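
For comparison, here is a minimal plain-Python sketch of the same formula for two boxes in corner format [x1, y1, x2, y2] (my own helper, not taken from the linked examples):

def iou(box1, box2, eps=1e-8):
    # intersection rectangle: max of the top-left corners, min of the bottom-right
    x1 = max(box1[0], box2[0])
    y1 = max(box1[1], box2[1])
    x2 = min(box1[2], box2[2])
    y2 = min(box1[3], box2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    # eps avoids division by zero for degenerate boxes
    return inter / (area1 + area2 - inter + eps)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # intersection 1, union 7 -> ~0.1429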

Tuesday, June 22, 2021

Gradient Tape Basic Tutorial (contd.)

Further to the post, I tried two built-in Keras loss functions, MAE and MSE. While the results of the simple loss and MSE are comparable, MAE produced poor results.


import numpy as np 
import tensorflow as tf
import random 

x = np.array([-2, -1, 0, 1, 2, 4, 5, 6], dtype=float)
y = 4* x - 5 

print(x)
print(y) 

#define weight and bias
w = tf.Variable(random.random(), trainable = True)
b = tf.Variable(random.random(), trainable = True)

#loss functions

def mae_loss(y_groundtruth, y_predicted):
  loss = tf.keras.losses.MeanAbsoluteError(reduction=tf.keras.losses.Reduction.SUM)
  return loss(y_groundtruth, y_predicted)

def mse_loss(y_groundtruth, y_predicted):
  loss = tf.keras.losses.MeanSquaredError()
  return loss(y_groundtruth, y_predicted)

def simple_loss(y_groundtruth, y_predicted):
  return tf.abs(y_groundtruth - y_predicted)

#lr
lr = 0.001


def fit_function(x_groundtruth , y_groundtruth) : 
  with tf.GradientTape(persistent = True) as tape :
    y_predicted = w * x_groundtruth + b 

    #loss  =  simple_loss(y_groundtruth , y_predicted)    
    loss  =  mse_loss(y_groundtruth , y_predicted)    
    #loss  =  mae_loss(y_groundtruth , y_predicted)    

  w_gradient = tape.gradient(loss , w)
  b_gradient = tape.gradient(loss , b) 

  w.assign_sub(w_gradient * lr)
  b.assign_sub(b_gradient * lr)

for _ in range(4000) : 
    fit_function(x, y)

#w and b are tf.Variable objects, printing them directly causes the 
# objects to be printed in <object> syntax. hence call the numpy method   
print("Expected weight: 4; Predicted weight: {}".format(w.numpy()))
print("Expected bias : -5; Predicted bias : {}".format(b.numpy()))



Below are the results:

1. With simple loss:
Expected weight: 4; Predicted weight: 3.9901857376098633
Expected bias: -5; Predicted bias: -5.0017409324646

2. With MSE:
Expected weight: 4; Predicted weight: 3.994652032852173
Expected bias: -5; Predicted bias: -4.970853328704834

3. With MAE:
Expected weight: 4; Predicted weight: 3.326237440109253
Expected bias: -5; Predicted bias: -1.6331548690795898


Notes:
1. Results are with 4000 epochs as opposed to 2000 epochs earlier. No significant improvement was noticed (in the case of simple loss and MSE) because the results were already fairly accurate with 2000 epochs.
2. For MAE, some trial and error was done by changing the default parameters, but the results did not improve. A plausible explanation: the gradient of MAE has constant magnitude regardless of how close the prediction is, so with a small learning rate the bias converges very slowly.

 Zerowithdots

https://zerowithdot.com/colab-workspace/

Interacting with custom libraries in Google Colaboratory



https://zerowithdot.com/django-celery-p2/

Django and Celery - demo application, part II: expanding.

https://zerowithdot.com/colab-github-workflow/

Colaboratory + Drive + Github -> the workflow made simpler

Real or Fake News Classification Kaggle challenge

#download True.csv and Fake.csv from
#https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
#to your google drive. The following code connects to gdrive using file ids.




#!pip install Pydrive
#!pip install sklearn

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#Authentication happens only once and
# a file adc.json is created once authentication is done.

import os

if not os.path.exists("Fake.csv"):
  fakefile_id = "1nBBfiZOoZToCaGsLxU3s_FRcFcg0Swkk"   #ID OF YOUR GDRIVE FILE.
  downloaded = drive.CreateFile({"id": fakefile_id})
  downloaded.GetContentFile("Fake.csv")

if not os.path.exists("True.csv"):
  truefileid = "1Z_SJxYF-43MUBj3-jmOk_xUS-6znKN_0"
  downloaded = drive.CreateFile({"id": truefileid})
  downloaded.GetContentFile("True.csv")

import pandas as pd

df_true_news = pd.read_csv("True.csv")
df_fake_news = pd.read_csv("Fake.csv")

print(df_true_news.head(20))
print(df_fake_news.head(20))

print(df_true_news.count())
print(df_fake_news.count())



def find_missing_vals(data):
  total = len(data)
  for column in data.columns:
    if data[column].isna().sum() != 0:
      # {:.2%} already multiplies by 100, so pass the raw fraction
      print("{} has {:,} ({:.2%}) missing values.".format(
          column, data[column].isna().sum(), data[column].isna().sum() / total))
    else:
      print(" {} has no missing values".format(column))

  print("\nMissing Value Summary\n{}".format("-"*35))
  print("\ndf_db\n{}".format("-"*15))
  print(data.isnull().sum(axis=0))


def remove_duplicates(data):
  print("\nCleaning Summary\n{}".format("-"*35))
  size_before = len(data)
  data.drop_duplicates(subset=None, keep="first", inplace=True)
  size_after = len(data)
  print("...removed {} duplicate rows in db data".format(size_before - size_after))


find_missing_vals(df_fake_news)
remove_duplicates(df_fake_news)
find_missing_vals(df_true_news)
remove_duplicates(df_true_news)

df_merged = pd.merge(df_fake_news, df_true_news, how="outer")


import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks", color_codes=True)

fig_dims = (20, 4.8)
fig, ax = plt.subplots(figsize=fig_dims)
sns.countplot(df_merged['subject'], ax=ax, data=df_merged)

df_fake_news["label"] = 0
df_true_news["label"] = 1

df_train = pd.merge(df_fake_news, df_true_news, how="outer")




from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer

import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
import string

def text_process(text):
  no_punctuation = [char for char in text if char not in string.punctuation]
  no_punctuation = "".join(no_punctuation)
  return [word for word in no_punctuation.split() if word.lower() not in stopwords.words("english")]


from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(df_train["title"], df_train["label"], test_size=0.2, random_state=42)

from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

news_classifier = Pipeline([
  ("vectorizer", CountVectorizer(analyzer=text_process)),
  ("tfidf", TfidfTransformer()),
  ("classifier", MLPClassifier(solver="adam", activation="tanh", random_state=1, max_iter=200, early_stopping=True))
  ])

news_classifier.fit(xtrain, ytrain)

predicted = news_classifier.predict(xtest)

from sklearn.metrics import classification_report
# classification_report expects (y_true, y_pred)
print(classification_report(ytest, predicted))


# Note: in recent scikit-learn versions joblib is a separate package ("import joblib")
from sklearn.externals import joblib
joblib.dump(news_classifier, "model.pkl")

from googleapiclient.discovery import build
drive_service = build("drive", "v3")

from googleapiclient.http import MediaFileUpload

file_metadata = {
    "name": "model.pkl",
    "mimeType": "text/plain",
  }

media = MediaFileUpload("model.pkl", mimetype="text/plain", resumable=True)

created = drive_service.files().create(body=file_metadata, media_body=media, fields="id").execute()

print("File ID: {} ".format(created.get("id")))


Monday, June 21, 2021

Model performance metrics

Precision, Recall, F1 Score
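
For quick reference, the standard definitions (TP/FP/FN = true positives, false positives, false negatives):

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 * Precision * Recall / (Precision + Recall)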

Downloading a file from Google Drive using PyDrive

 #!pip install Pydrive


from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth 
from oauth2client.client import GoogleCredentials

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#Authentication happens only once and 
# a file adc.json is created once authentication is done.

file_id = "1nBBfiZOoZToCaGsLxU3s_FRcFcg0Swkk"   #ID OF YOUR GDRIVE FILE.
#THE ID IS PART BETWEEN ..../d/   AND /view...............
# https://drive.google.com/file/d/1nBBfiZOoZToCaGsLxU3s_FRcFcg0Swkk/view?usp=sharing
downloaded = drive.CreateFile({"id": file_id})
downloaded.GetContentFile("MyFile.csv")


Resources

Jake VanderPlas - Python Data Science Handbook
https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb#scrollTo=HKf6PY5IvAon

University of Helsinki - Numpy
https://colab.research.google.com/github/csmastersUH/data_analysis_with_python_summer_2021/blob/master/numpy.ipynb

University of Helsinki - Matplotlib
https://colab.research.google.com/github/csmastersUH/data_analysis_with_python_summer_2021/blob/master/matplotlib.ipynb

University of Helsinki - Pandas
https://colab.research.google.com/github/csmastersUH/data_analysis_with_python_summer_2021/blob/master/pandas3.ipynb
https://csmastersuh.github.io/data_analysis_with_python_summer_2021/pandas3.html

Dive into Deep Learning (d2l.ai)
Interactive book, adapted by 175 universities from 40 countries; see the computer-vision chapter.









Sunday, June 20, 2021

 Nicholas Renotte

Tensorflow Object Detection in 5 hours with Python | Full Course with 3 Projects


http://github.com/nicknochnack

Friday, June 18, 2021

This strange google colab forms problem is not yet solved

https://stackoverflow.com/questions/64306088/google-colab-how-to-loop-through-data-filled-in-colab-forms-input

The question was posted on Oct 11, 2020 and is still unsolved.


The code tries to capture input into an array (the names and ages of persons in a people list). The name and age captured the first time are never cleared (even though you can see the form values changing in Colab), and at the end the first name and age are printed n times. Presumably this is because a Colab form only rewrites the literal in the cell's source code; a loop that is already running keeps using the values that were compiled when the cell started.

there_is_more_people = True
people = []

class Person:
  def __init__(self, name, age):
    self.name = name
    self.age = age


def register_people(there_is_more_people):
  while there_is_more_people:
    did_you_finish = input("did you finish filling the fields? y/n")[0].lower()
    #@markdown ## Fill your data here:
    name = "mary" #@param {type:"string"}
    age =  21#@param {type:"integer"}
    new_person = Person(name, age)
    people.append(new_person)
    if did_you_finish == 'y':
      people.append(new_person)
      input_there_is_more_people = input("there is more people? y/n")[0].lower()
      if input_there_is_more_people == 'y':
        there_is_more_people = True
        name = None
        age = None
      else: 
        there_is_more_people = False
  for i in people:
    print("These are the people you registered:")
    print(i.name)

register_people(there_is_more_people)


Thursday, June 10, 2021

Deep Dream

1. Below is the code for the Deep Dream tutorial, based on this.

2. It builds further upon the minimal code posted earlier.

3. The new code starts at "FIRST OCTAVE TRIAL".




import tensorflow as tf 

from tensorflow import keras 
import numpy as np 
import PIL 
import IPython.display as display 

url = "https://storage.googleapis.com/download.tensorflow.org/example_images" +\
      "/YellowLabradorLooking_new.jpg"

def download(url  , max_dim = None) : 
  name = url.split("/")[-1]
  image_file = tf.keras.utils.get_file(name , origin = url )
  image = PIL.Image.open(image_file)
  if max_dim : 
    image.thumbnail((max_dim , max_dim))
  return  np.array(image) 

def deprocess(image) : 
  image = 255 * (image  + 1.0 )   /2.0 
  image = tf.cast(image , tf.uint8)
  return image

def show(image)   : 
  display.display(PIL.Image.fromarray(np.array(image)))


original_image = download(url , max_dim = 500)
show(original_image) 
display.display(display.HTML("image"))

base_model = tf.keras.applications.InceptionV3(
    include_top = False , 
    weights = "imagenet"
    )

names = ["mixed3""mixed5"]
layers = [base_model.get_layer(name).output for name in names]

deepdreambasemodel = tf.keras.Model(inputs = base_model.inputs,outputs = layers)

def loss_func(img , model) : 
    img_batch = tf.expand_dims(img , axis = 0 ) 
    activations = model(img_batch)
    if len(activations) == 1 : 
      activations = [activations]

    losses = [] 
    for act in activations : 
      loss = tf.reduce_mean(act) 
      losses.append(loss)

    return tf.reduce_sum(losses)

class DeepDream(tf.Module ) : 
  def __init__ (self , model) : 
    self.model = model 

  @tf.function(
      input_signature = (
          tf.TensorSpec(shape=[None , None , 3], dtype = tf.float32),
          tf.TensorSpec(shape=[], dtype = tf.int32),
          tf.TensorSpec(shape=[], dtype = tf.float32)
      ) 
  )
  def __call__(self , img , steps = 100  , step_size = 0.01) : 
    loss = tf.constant(0.0)

    for n in range(steps) : 
      with tf.GradientTape() as tape : 
        tape.watch(img) 
        loss = loss_func(img , self.model)

      gradient = tape.gradient(loss , img) 
      gradient  /= tf.math.reduce_std(gradient) + 1e-8

      img = img + gradient * step_size
      img = tf.clip_by_value(img , -1 , 1)

    return loss , img


deepdreammodel = DeepDream(deepdreambasemodel)    

def run_deep_dream_simple(img , steps , step_size) : 
  img = tf.keras.applications.inception_v3.preprocess_input(img)
  img = tf.convert_to_tensor(img) 
  step_size= tf.convert_to_tensor(step_size) 
  
  steps_remaining = steps 
  step = 0 
  while steps_remaining : 
    if steps_remaining > 100 : 
        run_step = tf.constant(100)
    else :
        run_step = tf.constant(steps_remaining)

    steps_remaining -= run_step 
    step += run_step

    loss , img  = deepdreammodel(img , run_step , tf.constant(step_size))

    show(deprocess(img))

  show(deprocess(img))    
  return(deprocess(img))

run_deep_dream_simple(img = original_image , steps = 100 , step_size= 0.01)  


###########################FIRST OCTAVE  TRIAL##################################
import time 
starttime = time.time() 
img = tf.constant ( np.array(original_image))
base_shape = tf.shape(img)[:-1]
float_shape = tf.cast(base_shape , tf.float32) 

OCTAVE_SCALE = 1.30 

for n in range(-2,3) : 
    new_shape = tf.cast( float_shape * (OCTAVE_SCALE ** n), tf.int32)
    img = tf.image.resize(img , new_shape).numpy()

    img = run_deep_dream_simple(img , steps = 100 , step_size = 0.01)


img = tf.image.resize(img , base_shape)
img = tf.image.convert_image_dtype(img /255 , dtype = tf.uint8) 
show(img)     
print ("time taken {} ".format(time.time() - starttime))
################################################################################



##NOW DEEP DREAM WITH OCTAVES 
#THIS ADDS ROLL FUNCTION.
#ALSO NOTICE THAT THE CLASS IS NOW LIMITED ONLY TO CALCULATE
#GRADIENTS, NEW IMAGE FORMATION USING GRADIENTS IS DONE IN run FUNCTION
def random_roll(img , maxroll):
#NOTICE minval = -maxroll , 
#NOT minval = maxroll, IT WILL GIVE YOU ERROR minval < maxval
  shift = tf.random.uniform(
                            shape=[2] , 
                            minval = -maxroll , 
                            maxval = maxroll, 
                            dtype = tf.int32
                            )
  rolled_image = tf.roll(img , shift = shift , axis = [0,1])
  return shift, rolled_image

shift, rolled_image = random_roll(np.array(original_image) , maxroll = 512)
show(rolled_image)

class TiledGradient(tf.Module) : 
  def __init__ ( self , model) : 
    self.model = model 

  @tf.function(
      input_signature = (
          tf.TensorSpec(shape=[None,None , 3] , dtype = tf.float32),
          tf.TensorSpec(shape=[] , dtype = tf.int32)
      ) 
  )
  def __call__(self , img , tile_size = 512) : 
    shift , rolled_img = random_roll(img , 512)
    gradients  = tf.zeros_like(rolled_img)

    xs = tf.range(0 , rolled_img.shape[0] , tile_size)[:-1]
    if not tf.cast(len(xs) , tf.bool) :
      xs = tf.constant([0])

    ys = tf.range(0 , rolled_img.shape[1] , tile_size) [:-1]
    if not tf.cast(len(ys) , tf.bool) : 
      ys = tf.constant([0])

    for x in xs : 
      for y in ys: 
        with tf.GradientTape() as tape : 
          tape.watch(rolled_img) 
          loss  = loss_func(rolled_img[x:x+tile_size,y:y+tile_size],self.model)

        gradients = gradients + tape.gradient(loss , rolled_img)

#NOTICE THE -shift BELOW  shift = - shift. ####
    gradients = tf.roll(gradients,shift = -shift , axis = [0,1])
    gradients /= tf.math.reduce_std(gradients) + 1e-8

    return gradients

get_tiled_gradients = TiledGradient(deepdreambasemodel)    


def run_deep_dream_with_octaves(img,steps_per_octave = 100 , step_size = 0.01 , 
                          octaves = range(-2,3),
                          octave_scale = 1.30) :
  base_shape = tf.shape(img) 
  img = tf.keras.preprocessing.image.img_to_array(img) 
  img = tf.keras.applications.inception_v3.preprocess_input(img) 

  initial_shape = img.shape[:-1]
  img = tf.image.resize(img , initial_shape)

  for octave in octaves : 
    new_shape = (tf.cast(tf.convert_to_tensor(base_shape[:-1]) , tf.float32)) \
                     *(octave_scale ** octave)
    img = tf.image.resize(img, tf.cast(new_shape , tf.int32))

    for step in range(steps_per_octave) : 
      gradient = get_tiled_gradients(img) 
      img = img + gradient * step_size
      img = tf.clip_by_value(img , -1, 1)

      if step % 10 == 0:
        show(deprocess(img))
        print ("Octave {}, Step {}".format(octave, step))
  return deprocess(img) 


img = run_deep_dream_with_octaves(img=original_image, step_size=0.01)


img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)  





Also tried it on one more image: https://i.imgur.com/8VcI2u0.jpg




