
In iOS 11, Apple introduced a framework called Vision. This framework uses algorithms to perform a series of tasks on images and video (text detection, barcode detection, etc.). Now, with iOS 13, Apple has published a new framework, VisionKit, that lets you use the system's own document scanner (the same one the Notes app uses). Let's see how you can develop your own OCR in iOS 13 with VisionKit.

Project start

In order to check how we can scan a document and recognize its content, we create a project in Xcode 11 (remember that VisionKit only works on iOS 13+). The complete project can be found on GitHub.

Since we are going to use the device's camera to scan documents, the operating system will show a message asking the user for permission to use it. To prevent an error from occurring and the application from crashing, we must declare that the app needs the camera.

To do this, in the Info.plist file we add the key 'Privacy – Camera Usage Description', along with the text that will be displayed to the user when permission is requested (for example: "To be able to scan documents you must allow the use of the camera.").

Info.plist with the camera usage description key added.
Permission request dialog.

If permission is denied, the following message will appear when we try to scan:

Interface design

This project will basically consist of a UIImageView component, in which we will show the scanned document; a UITextView component, to show the text that the scanner has recognized; and a UIButton component, to trigger the document scanning. In this project, all of this is done in code, without storyboards or .xib files.

Project interface design.

Interface programming

First we create the ScanButton component:
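The original code block is not reproduced here, so what follows is a minimal sketch of what a ScanButton subclass might look like (the styling values are assumptions):

```swift
import UIKit

// Sketch of the ScanButton component: a rounded, Auto Layout-ready
// button that will later trigger the document scanner.
final class ScanButton: UIButton {

    override init(frame: CGRect) {
        super.init(frame: frame)
        configure()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func configure() {
        // We lay views out in code, so disable autoresizing masks.
        translatesAutoresizingMaskIntoConstraints = false
        setTitle("Scan document", for: .normal)
        titleLabel?.font = UIFont.boldSystemFont(ofSize: 18)
        backgroundColor = .systemBlue
        layer.cornerRadius = 10
    }
}
```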

Then the ScanImageView component:
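Again, a sketch under the same assumptions: an image view prepared for Auto Layout that will display the scanned page.

```swift
import UIKit

// Sketch of the ScanImageView component used to display the scanned page.
final class ScanImageView: UIImageView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        configure()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func configure() {
        translatesAutoresizingMaskIntoConstraints = false
        contentMode = .scaleAspectFit  // keep the page's aspect ratio
        layer.cornerRadius = 10
        clipsToBounds = true
    }
}
```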

And finally, the OcrTextView component:
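A possible OcrTextView sketch (font and styling are illustrative); it is read-only since it only displays recognized text:

```swift
import UIKit

// Sketch of the OcrTextView component that will display the recognized text.
final class OcrTextView: UITextView {

    override init(frame: CGRect, textContainer: NSTextContainer?) {
        super.init(frame: frame, textContainer: textContainer)
        configure()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func configure() {
        translatesAutoresizingMaskIntoConstraints = false
        font = .systemFont(ofSize: 16)
        layer.cornerRadius = 10
        isEditable = false  // display-only: the user should not edit the OCR output
    }
}
```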

Now we call them from the ViewController and position them on the screen:
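A sketch of how the controller might instantiate and lay out the three components (the constraint constants are assumptions):

```swift
import UIKit

class ViewController: UIViewController {

    private var scanButton = ScanButton(frame: .zero)
    private var scanImageView = ScanImageView(frame: .zero)
    private var ocrTextView = OcrTextView(frame: .zero, textContainer: nil)

    override func viewDidLoad() {
        super.viewDidLoad()
        configure()
    }

    private func configure() {
        view.addSubview(scanImageView)
        view.addSubview(ocrTextView)
        view.addSubview(scanButton)

        let padding: CGFloat = 16
        NSLayoutConstraint.activate([
            // Image view on top…
            scanImageView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: padding),
            scanImageView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: padding),
            scanImageView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -padding),
            scanImageView.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.4),

            // …text view in the middle…
            ocrTextView.topAnchor.constraint(equalTo: scanImageView.bottomAnchor, constant: padding),
            ocrTextView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: padding),
            ocrTextView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -padding),

            // …and the scan button at the bottom.
            scanButton.topAnchor.constraint(equalTo: ocrTextView.bottomAnchor, constant: padding),
            scanButton.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: padding),
            scanButton.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -padding),
            scanButton.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -padding),
            scanButton.heightAnchor.constraint(equalToConstant: 50)
        ])
    }
}
```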

Presentation of the scanning controller (VNDocumentCameraViewController)

In order to present the controller that will allow us to scan the document, we must create and present an instance of the VNDocumentCameraViewController class.

At the end of the configure method we add the following code, which allows us to call the scanDocument() method:
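That wiring is a one-liner target/action registration, something like:

```swift
// At the end of configure(): tapping the button calls scanDocument().
scanButton.addTarget(self, action: #selector(scanDocument), for: .touchUpInside)
```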

After the configure() method we create the scanDocument() method:
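A minimal sketch of that method, which creates the scanner controller, assigns its delegate, and presents it:

```swift
import VisionKit

// Presents the system document scanner (iOS 13+).
@objc private func scanDocument() {
    let scanVC = VNDocumentCameraViewController()
    scanVC.delegate = self  // ViewController adopts VNDocumentCameraViewControllerDelegate
    present(scanVC, animated: true)
}
```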

As you can see, @objc has been added in front of the function because, although we are programming in Swift, #selector exposes the method to the Objective-C runtime.

In addition, the VNDocumentCameraViewController class presents the VNDocumentCameraViewControllerDelegate protocol (which we have called in scanVC.delegate = self), so we can implement its methods. We do this in an extension of the ViewController class to have the code more organized:
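A sketch of that extension, assuming the property names used above (the processImage call refers to the method built later, in the Image processing section):

```swift
import UIKit
import VisionKit

extension ViewController: VNDocumentCameraViewControllerDelegate {

    // Called after the user taps Save with one or more scanned pages.
    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        guard scan.pageCount >= 1 else {
            controller.dismiss(animated: true)
            return
        }
        scanImageView.image = scan.imageOfPage(at: 0)
        processImage(scan.imageOfPage(at: 0))  // run OCR on the first page
        controller.dismiss(animated: true)
    }

    // Called when scanning fails (e.g. camera permission denied).
    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFailWithError error: Error) {
        print(error)  // placeholder: real error handling goes here
        controller.dismiss(animated: true)
    }

    // Called when the user taps Cancel.
    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }
}
```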

The first method, documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan), is called when we have scanned one or more pages and saved them (Keep Scan first, then Save).

The scan object (VNDocumentCameraScan) exposes three members:

  • pageCount: the number (Int) of scanned pages.
  • imageOfPage(at index: Int): the image (UIImage) of the page at the given index.
  • title: the title (String) of the scanned document.

Once we have verified that at least one page has been scanned, and before dismissing the controller, we pass the scanned image to the scanImageView component.

The second method, documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error), is called when an error occurs while scanning the document, so this is where we should perform some error handling (for example, if the error is due to the user not having granted camera permission, we can show an alert asking them to enable it).

The third method, documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController), is called when the Cancel button of the VNDocumentCameraViewController is tapped. Here we simply dismiss the controller.

Text recognition

Now, in order to recognize and extract the text from the documents we have scanned, we will use Apple's Vision framework, available since iOS 11. Specifically, we will use the VNRecognizeTextRequest class, which, as the documentation indicates, finds and recognizes text in an image. For this process we need a request (an instance of VNRecognizeTextRequest), on which we can set text recognition parameters such as:

  • customWords. An array of words we define to supplement the recognizer's dictionary during the recognition stage (for example, names, brands, etc.).
  • minimumTextHeight. The minimum height of the text (relative to the image height) for recognition to take place. As Apple indicates in its documentation:

Increasing the size reduces memory consumption and expedites recognition with the tradeoff of ignoring text smaller than the minimum height. The default value is 1/32, or 0.03125.

In this project we will apply, as an example, some of these parameters:
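For illustration (the concrete values below are assumptions, not taken from the original project), the parameters can be set on the request like so:

```swift
// Assumes `ocrRequest` is a VNRecognizeTextRequest created elsewhere.
ocrRequest.recognitionLevel = .accurate        // favor accuracy over speed
ocrRequest.recognitionLanguages = ["en-US"]    // languages to recognize
ocrRequest.usesLanguageCorrection = true       // apply language correction
ocrRequest.customWords = ["VisionKit"]         // domain-specific vocabulary
ocrRequest.minimumTextHeight = 0.03125         // the documented default
```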

At this point, we create the configureOCR() function, which will be the one that contains the functionality to analyze, recognise and extract the text from the image:
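A sketch of that function, assuming an ocrRequest property on the controller (declared as `private var ocrRequest = VNRecognizeTextRequest(completionHandler: nil)`):

```swift
import Vision

private func configureOCR() {
    ocrRequest = VNRecognizeTextRequest { [weak self] request, error in
        // The results are the lines/sentences Vision detected in the image.
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }

        var ocrText = ""
        for observation in observations {
            // Each observation offers ranked candidates; take the top one.
            guard let topCandidate = observation.topCandidates(1).first else { continue }
            ocrText += topCandidate.string + "\n"
        }

        // UI updates must happen on the main thread.
        DispatchQueue.main.async {
            self?.ocrTextView.text = ocrText
        }
    }

    ocrRequest.recognitionLevel = .accurate
    ocrRequest.recognitionLanguages = ["en-US"]
    ocrRequest.usesLanguageCorrection = true
}
```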

We will call this function in viewDidLoad(), after the configure() method. In this function we create an instance of VNRecognizeTextRequest with a single argument, completionHandler, which is called every time text is detected in an image.

At this point the process that occurs is:

  • First, we check that request.results contains a list of observations (of type VNRecognizedTextObservation), which correspond to the lines, sentences, etc. that the Vision framework has detected.
  • Next, we iterate over this list of observations. Each observation consists of a series of candidates for what the recognized text may be, each with a certain confidence level. We choose the first candidate and append it to a text string.
  • Finally, we show the resulting text in the OcrTextView element we created at the beginning (remember to do this on the main thread; that's why we use DispatchQueue.main.async).

Image processing

Finally, we only have to process the image captured by the scanner. For this we create a function that takes a parameter of type UIImage (the captured image) and creates an instance of VNImageRequestHandler, to which we pass the ocrRequest instance that we created at the beginning:
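A sketch of that function (running the handler off the main thread is an assumption; Vision requests can block):

```swift
import UIKit
import Vision

private func processImage(_ image: UIImage) {
    // VNImageRequestHandler works with Core Graphics, so we need a CGImage.
    guard let cgImage = image.cgImage else { return }

    ocrTextView.text = ""  // clear any previous result

    // Perform recognition off the main thread; the request's
    // completion handler dispatches back to main for UI updates.
    DispatchQueue.global(qos: .userInitiated).async {
        let requestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try requestHandler.perform([self.ocrRequest])
        } catch {
            print(error)
        }
    }
}
```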

As the documentation indicates, to instantiate this type we need a CGImage, not a UIImage (since it works with Core Graphics), so we obtain it from the image that was passed in.

We can also pass a list of options of the VNImageOption type (which describe specific properties of the image or how it should be treated), although in this case we will not pass any.

Finally, we perform the text recognition request (ocrRequest). This method, processImage(_ image: UIImage), will be called at the end of the documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) method, just before dismissing the controller with controller.dismiss(animated: true).

Scanner test

Now we can test the application. To do this, we run it and capture an image.

As you can see, it perfectly recognizes the text of the image.


As we have seen, thanks to the Vision and VisionKit libraries we can easily build our own document scanner on our mobile. Remember that you can download the entire project on GitHub.

