Reading a Credit Card Number in Python

The purpose of this tutorial is to help you build a credit card reader with OpenCV and template-matching techniques to identify the card number and the card type.

Let us get started!

Also read: How to read Images in Python using OpenCV?


Introduction to OCR

We have all seen optical character recognition (OCR) used widely in machine learning and deep learning. One of many such applications is the identification and reading of credit cards and card numbers.

Credit Card Reader Demonstration

The question that might come to your mind is: why? This application could be of great help to banks and other financial institutions that need to digitally recognize card numbers and card types.


Implementation of a Credit Card Reader in Python

Now that we understand the concept and what we will build by the end of this tutorial, let's start building the project step by step.


Step 1: Importing Modules

We'll be working with numpy and matplotlib along with the OpenCV module in this example.

import cv2
import imutils
import argparse
import numpy as np
from imutils import contours
from matplotlib import pyplot as plt

Step 2: Assign Card Type

The card type is assigned according to the first digit of the card number, as shown below.

FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"}
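To see how this mapping is used later in the pipeline, here is a minimal sketch of looking up the issuer from the first digit of a recognized number string (the helper function and the sample numbers below are made up for illustration, not part of the original code):

```python
FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"}

def card_type(number: str) -> str:
    # The issuer is determined solely by the first digit of the card number;
    # a leading digit not in the table falls back to "Unknown".
    return FIRST_NUMBER.get(number[0], "Unknown")

print(card_type("4000123412341234"))  # Visa
print(card_type("371449635398431"))   # American Express
```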

Step 3: Loading and Pre-processing of the Reference Image

In order to read the reference OCR image, we make use of the imread function. The reference image contains the digits 0-9 in the OCR-A font, which can be used later in the pipeline to perform template matching.

The pre-processing of the image includes converting it to a grayscale image and then thresholding + inverting it to get the binary inverted image.

ref = cv2.imread('ocr_a_reference.png')
ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]

Step 4: Detecting Contours

In this step, we detect the contours present in the pre-processed image and store the returned contour information. Next, we sort the contours from left to right and initialize a dictionary, digits, which maps each digit name to its region of interest.

refCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
refCnts = imutils.grab_contours(refCnts)
refCnts = contours.sort_contours(refCnts, method="left-to-right")[0]
digits = {}
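For intuition, sorting contours "left-to-right" essentially amounts to ordering them by the x-coordinate of their bounding boxes. A simplified sketch with (x, y, w, h) tuples standing in for real cv2.boundingRect results (the toy boxes below are invented for illustration; imutils performs this on actual contour arrays):

```python
# Toy bounding boxes (x, y, w, h) standing in for cv2.boundingRect output.
boxes = [(120, 10, 20, 30), (5, 12, 20, 30), (60, 9, 20, 30)]

# "Left-to-right" ordering is simply a sort on the x-coordinate.
sorted_boxes = sorted(boxes, key=lambda b: b[0])
print([b[0] for b in sorted_boxes])  # [5, 60, 120]
```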

Step 5: Creating Bounding Boxes around Digits

Now in this step, we loop through the reference-image contours obtained in the previous step, where each value holds the digit along with its contour information. We further compute a bounding box for each contour and store the (x, y) coordinates along with the height and width of the computed box.

for (i, c) in enumerate(refCnts):
    (x, y, w, h) = cv2.boundingRect(c)
    roi = ref[y:y + h, x:x + w]
    roi = cv2.resize(roi, (57, 88))
    digits[i] = roi

rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

Step 6: Loading and Pre-processing of the Credit Card Image

In this step, we load the photo of the credit card and then resize it to a width of 300 pixels in order to maintain the aspect ratio.

This step is followed by converting the image to grayscale. After this, we perform morphological operations on the grayscale image.
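Resizing to a fixed width while keeping the aspect ratio (which imutils.resize handles for us here) boils down to scaling the height by the same ratio applied to the width. A minimal sketch of that arithmetic, with a made-up helper name and example dimensions:

```python
def resize_dims(h, w, new_width=300):
    # Scale the height by the same ratio applied to the width,
    # so the image's aspect ratio is preserved.
    ratio = new_width / float(w)
    return int(h * ratio), new_width

# e.g. a 480x640 (h x w) photo resized to width 300 keeps its 3:4 shape
print(resize_dims(480, 640))  # (225, 300)
```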

The next step is to compute a Scharr gradient and store the result as gradX. Then we calculate the absolute value of the stored gradX array. We aim to scale all the values into the range 0-255.

This normalization of values takes place by computing the minimum and maximum values of gradX and forming an equation to achieve min-max normalization.
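The min-max normalization described above maps the smallest gradient value to 0 and the largest to 255. A small self-contained sketch of the same formula on toy values (the sample array is invented for illustration):

```python
import numpy as np

grad = np.array([-3.0, 0.0, 1.5, 6.0])  # toy gradient values
(minVal, maxVal) = (np.min(grad), np.max(grad))

# (x - min) / (max - min) rescales values into [0, 1]; multiplying by 255
# stretches that to the full 8-bit range before the uint8 cast.
scaled = (255 * ((grad - minVal) / (maxVal - minVal))).astype("uint8")
print(scaled)  # [  0  85 127 255]
```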

Finally, we find the contours, store them in a list, and initialize a list to hold the digit group locations. Then we loop through the contours the same way we did for the reference image in step 5.

Next, we'll sort the groupings from left to right and initialize a list for the credit card digits.

image = cv2.imread('credit_card_03.png')
image = imutils.resize(image, width=300)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)

gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal)))
gradX = gradX.astype("uint8")

gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
thresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)

cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
locs = []

for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)
    if ar > 2.5 and ar < 4.0:
        if (w > 40 and w < 55) and (h > 10 and h < 20):
            locs.append((x, y, w, h))

locs = sorted(locs, key=lambda x: x[0])
output = []

Now that we know where each group of four digits is, let's loop through the four sorted groupings and determine the digits therein. The looping includes thresholding, detecting contours, and template matching as well.

for (i, (gX, gY, gW, gH)) in enumerate(locs):
    groupOutput = []
    group = gray[gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]
    group = cv2.threshold(group, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digitCnts = imutils.grab_contours(digitCnts)
    digitCnts = contours.sort_contours(digitCnts, method="left-to-right")[0]
    for c in digitCnts:
        (x, y, w, h) = cv2.boundingRect(c)
        roi = group[y:y + h, x:x + w]
        roi = cv2.resize(roi, (57, 88))
        scores = []
        for (digit, digitROI) in digits.items():
            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)
            (_, score, _, _) = cv2.minMaxLoc(result)
            scores.append(score)
        groupOutput.append(str(np.argmax(scores)))
    cv2.rectangle(image, (gX - 5, gY - 5),
        (gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)
    cv2.putText(image, "".join(groupOutput), (gX, gY - 15),
        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)
    output.extend(groupOutput)

Step 7: Displaying the Final Results

The code below will display the final card type, card number, and the image with OCR applied.

print("Credit Card Type: {}".format(FIRST_NUMBER[output[0]]))
print("Credit Card #: {}".format("".join(output)))

plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Image')
plt.show()

The Final Code

import cv2
import imutils
import argparse
import numpy as np
from imutils import contours
from matplotlib import pyplot as plt

FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"}

ref = cv2.imread('ocr_a_reference.png')
ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]

refCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
refCnts = imutils.grab_contours(refCnts)
refCnts = contours.sort_contours(refCnts, method="left-to-right")[0]
digits = {}

for (i, c) in enumerate(refCnts):
    (x, y, w, h) = cv2.boundingRect(c)
    roi = ref[y:y + h, x:x + w]
    roi = cv2.resize(roi, (57, 88))
    digits[i] = roi

rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

image = cv2.imread('credit_card_03.png')
image = imutils.resize(image, width=300)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)

gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal)))
gradX = gradX.astype("uint8")

gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
thresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)

cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
locs = []

for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)
    if ar > 2.5 and ar < 4.0:
        if (w > 40 and w < 55) and (h > 10 and h < 20):
            locs.append((x, y, w, h))

locs = sorted(locs, key=lambda x: x[0])
output = []

for (i, (gX, gY, gW, gH)) in enumerate(locs):
    groupOutput = []
    group = gray[gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]
    group = cv2.threshold(group, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digitCnts = imutils.grab_contours(digitCnts)
    digitCnts = contours.sort_contours(digitCnts, method="left-to-right")[0]
    for c in digitCnts:
        (x, y, w, h) = cv2.boundingRect(c)
        roi = group[y:y + h, x:x + w]
        roi = cv2.resize(roi, (57, 88))
        scores = []
        for (digit, digitROI) in digits.items():
            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)
            (_, score, _, _) = cv2.minMaxLoc(result)
            scores.append(score)
        groupOutput.append(str(np.argmax(scores)))
    cv2.rectangle(image, (gX - 5, gY - 5),
        (gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)
    cv2.putText(image, "".join(groupOutput), (gX, gY - 15),
        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)
    output.extend(groupOutput)

print("Credit Card Type: {}".format(FIRST_NUMBER[output[0]]))
print("Credit Card #: {}".format("".join(output)))

plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Image')
plt.show()

Some Sample Outputs

Now permit'south look at some sample outputs subsequently implementing the code mentioned above on various credit carte du jour images.

Credit Card Reader Output 01
Credit Card Reader Output 02
Credit Card Reader Output 03

Conclusion

I hope you understood the concept and loved the outputs. Try the same with more images and be amazed by the results.

Happy Coding! 😇

Want to learn more? Check out the tutorials mentioned below:

  1. Python: Detecting Contours
  2. Boxplots: Edge Detection in Images using Python
  3. Image Processing in Python – Edge Detection, Resizing, Erosion, and Dilation


Source: https://www.askpython.com/python/examples/opencv-credit-card-reader
