
VeriLook SDK
Face identification for stand-alone or client-server applications
VeriLook facial identification technology is designed for biometric systems developers and integrators. The technology ensures system performance and reliability with live face detection, simultaneous multiple face recognition and fast face matching in 1-to-1 and 1-to-many modes.
VeriLook is available as a software development kit that supports development of stand-alone and network-based solutions on Microsoft Windows, Linux, macOS, iOS and Android platforms.
Reliability Tests
We present testing results that show the reliability of template verification and face liveness check for the VeriLook 12.3 algorithm.
Template Verification Reliability Tests
The following public datasets were used for the VeriLook 12.3 algorithm face recognition reliability evaluations:
- NIST Special Database 32 - Multiple Encounter Dataset (MEDS-II).
  - All full-profile face images were removed from the dataset because they are not supported by VeriLook SDK. This resulted in 1,216 images of 518 persons.
- University of Massachusetts Labeled Faces in the Wild (LFW).
  - According to the original protocol, only 6,000 pairs (3,000 genuine and 3,000 impostor) should be used to report results. However, recent algorithms are "very close to the maximum achievable by a perfect classifier" [source] on that protocol. As Neurotechnology algorithms were not trained on any image from this dataset, the reported verification results instead come from matching every pair among all 13,233 face images of 5,729 persons (see the pair-count sketch after this list).
  - All identity mistakes mentioned on the LFW website were fixed, along with several issues that were not mentioned there.
  - Some images from the LFW dataset contain multiple faces. To resolve these ambiguities, the correct face for the assigned identity was chosen manually.
- CASIA NIR-VIS 2.0 Database.
  - The dataset contains face images captured in the visible light (VIS) and near-infrared (NIR) spectra. Following the original protocol, VeriLook algorithm testing used VIS images as the gallery and NIR images as probes.
  - The original protocol splits the dataset into two parts: View1, intended for algorithm development, and View2, for performance evaluation. Neurotechnology algorithms were not trained on any image from this dataset, so only the View2 part, with 12,393 NIR images and 2,564 VIS images, was used for the face verification evaluation.
  - The non-cropped images (640 x 480 pixels) from the dataset were used for VeriLook algorithm testing.
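For a sense of the scale of the all-pairs LFW protocol, here is a minimal sketch in plain Python of how the comparison counts are derived. The per-identity image counts in the second half are made up for illustration; the real distribution over the 5,729 LFW identities is not listed here.

```python
from math import comb

# Every unordered pair of the 13,233 LFW images is compared once.
total_images = 13233
total_pairs = comb(total_images, 2)
print(f"{total_pairs:,}")  # 87,549,528 comparisons

# Genuine pairs are pairs whose images share an identity: with n_i images
# for identity i, the genuine-pair count is the sum over i of C(n_i, 2).
images_per_identity = [3, 1, 7, 2]  # toy example, not real LFW counts
genuine_pairs = sum(comb(n, 2) for n in images_per_identity)
impostor_pairs = comb(sum(images_per_identity), 2) - genuine_pairs
print(genuine_pairs, impostor_pairs)  # 25 genuine, 53 impostor
```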
Two experiments were performed with each dataset:
- Experiment 1 maximized matching accuracy. VeriLook 12.3 algorithm reliability in this test is shown on the ROC charts as blue curves.
- Experiment 2 maximized matching speed. VeriLook 12.3 algorithm reliability in this test is shown on the ROC charts as red curves.
Receiver operating characteristic (ROC) curves are commonly used to demonstrate the recognition quality of an algorithm. ROC curves show the dependence of the false rejection rate (FRR) on the false acceptance rate (FAR). The equal error rate (EER) is the rate at which FAR and FRR are equal.
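To illustrate how these metrics relate, the sketch below sweeps a decision threshold over matching scores to obtain FAR, FRR, the EER and a fixed-FAR operating point like the "FRR at 0.1 % FAR" rows in the results table. The score distributions are synthetic; VeriLook's actual scores and thresholds are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)    # same-person comparison scores
impostor = rng.normal(0.2, 0.1, 100_000)  # different-person comparison scores

thresholds = np.linspace(0.0, 1.0, 1001)
# FAR: fraction of impostor scores accepted; FRR: fraction of genuine rejected.
far = np.array([(impostor >= t).mean() for t in thresholds])
frr = np.array([(genuine < t).mean() for t in thresholds])

# EER: the point on the ROC curve where FAR and FRR meet.
eer_idx = np.argmin(np.abs(far - frr))
print(f"EER ≈ {(far[eer_idx] + frr[eer_idx]) / 2:.4%} at threshold {thresholds[eer_idx]:.3f}")

# Operating points such as "FRR at 0.1 % FAR" take the lowest FRR among
# thresholds whose FAR does not exceed the target.
target_far = 0.001
print(f"FRR at 0.1 % FAR ≈ {frr[far <= target_far].min():.4%}")
```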
[ROC chart: MEDS-II dataset]
[ROC chart: LFW dataset]
[ROC chart: NIR-VIS 2.0 dataset]
VeriLook 12.3 algorithm testing results with face images from public datasets

| | MEDS-II Exp. 1 | MEDS-II Exp. 2 | LFW Exp. 1 | LFW Exp. 2 | NIR-VIS 2.0 Exp. 1 | NIR-VIS 2.0 Exp. 2 |
|---|---|---|---|---|---|---|
| Image count | 1216 | 1216 | 13233 | 13233 | 14957 | 14957 |
| Subject count | 518 | 518 | 5729 | 5729 | 725 | 725 |
| Session count | 1 - 18 | 1 - 18 | 1 - 530 | 1 - 530 | 4 | 4 |
| Image size (pixels) | variable | variable | 250 x 250 | 250 x 250 | 480 x 640 | 480 x 640 |
| Template size (bytes) | 322 | 322 | 322 | 322 | 322 | 322 |
| EER | 0.2256 % | 0.2627 % | 0.0080 % | 0.0147 % | 0.0413 % | 0.0538 % |
| FRR at 0.1 % FAR | 0.2268 % | 0.2721 % | 0.0029 % | 0.0041 % | 0.0129 % | 0.0436 % |
| FRR at 0.01 % FAR | 0.2268 % | 0.3175 % | 0.0078 % | 0.0194 % | 0.1291 % | 0.1678 % |
| FRR at 0.001 % FAR | 0.2268 % | 0.3175 % | 0.0281 % | 0.0454 % | 0.7455 % | 0.9262 % |
Face Liveness Check Reliability Tests
Neurotechnology's internally collected dataset was used for testing the face liveness check algorithm. The dataset contained:
- 27,483 real samples.
- 18,060 attack samples, covering these spoofing scenarios:
  - Screen-based: phone, laptop, tablet and PC screens were used.
  - Paper photo-based: regular laser printer paper, photo paper and matte paper were used.
  - 3D mask-based: off-the-shelf carnival masks were used.
Receiver operating characteristic (ROC) curves are commonly used to demonstrate the accuracy of a biometric algorithm. Here the ROC curve shows the dependence of the Bona fide Presentation Classification Error Rate (BPCER) on the Attack Presentation Classification Error Rate (APCER). The equal error rate (EER) is the rate at which APCER and BPCER are equal.
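The table below reports BPCER at fixed APCER operating points. As a minimal sketch of how such a point is extracted, the code below picks the loosest liveness decision threshold whose APCER stays within a target and reports the BPCER there. The liveness scores are synthetic, not VeriLook's actual classifier output.

```python
import numpy as np

rng = np.random.default_rng(1)
bona_fide = rng.normal(0.9, 0.05, 20_000)  # liveness scores for real faces
attacks = rng.normal(0.3, 0.15, 15_000)    # liveness scores for spoof attempts

def bpcer_at_apcer(bona_fide, attacks, target_apcer):
    """BPCER at the loosest threshold whose APCER stays <= target."""
    thresholds = np.linspace(0.0, 1.0, 2001)
    # APCER: attacks wrongly accepted as live; BPCER: real faces rejected.
    apcer = np.array([(attacks >= t).mean() for t in thresholds])
    bpcer = np.array([(bona_fide < t).mean() for t in thresholds])
    return bpcer[apcer <= target_apcer].min()

for target in (0.10, 0.01, 0.001):
    print(f"BPCER at {target:.1%} APCER ≈ {bpcer_at_apcer(bona_fide, attacks, target):.4%}")
```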

VeriLook 12.3 liveness check algorithm testing results with the Neurotechnology internal dataset

| Metric | Value |
|---|---|
| EER | 0.1702 % |
| BPCER at 10 % APCER | 0.0109 % |
| BPCER at 1 % APCER | 0.0182 % |
| BPCER at 0.1 % APCER | 0.7606 % |