Liveness.com - Biometric Liveness Detection Explained (aka DefeatSpamBots.com)

What is "Liveness"?

In biometrics, Liveness Detection is a computer's ability to determine that it is interfacing with a physically present human being and not a bot, an inanimate spoof artifact, or injected video/data. Remember: it's "Liveness," not "Liveliness." Don't make that rookie mistake!

The History of Liveness

In 1950, Alan Turing (wiki) developed the famous "Turing Test," which measures a computer's ability to exhibit human-like behavior. Conversely, Liveness Detection is AI that determines whether a computer is interacting with a live human. (Photo: Turing, c. 1928)

The "Godmother of Liveness"

Dorothy E. Denning (wiki) is a member of the National Cyber Security Hall of Fame and coined the term "Liveness" in her 2001 Information Security Magazine article, "It's 'liveness,' not secrecy, that counts." She states: "A good biometrics system should not depend on secrecy," and "... biometric prints need not be kept secret, but the validation process must check for liveness of the readings." Decades ahead of her time, Dorothy E. Denning's vision for Liveness Detection in biometric face verification could not have been more correct.

How Liveness Verification Protects Us

Ms. Denning's 2D photo posted above is biometric data, and it is now cached on your computer. Is she somehow more vulnerable now that you have a copy of her photo? Not if her accounts are secured with strong 3D Liveness Detection, because the photo won't fool 3D Liveness AI. Nor will a video, a copy of her driver's license, passport, fingerprint, or iris.
In fact, she must be physically present to access her accounts, so she need not worry about keeping her biometric data "secret." 3D Liveness Detection defeats spam bots and prevents bad actors from using stolen photos, injected deepfake videos, life-like masks, or other spoofs to create or access online accounts. Liveness ensures only real humans can create and access accounts, and by doing so, Liveness checks solve some very serious problems. For example, Facebook had to delete 5.4 billion fake accounts in 2019 alone! Requiring proof of Liveness would have prevented those fakes from ever being created.

Level 1-5 Threat Vectors - Spoof Artifact & Bypass Types

When a non-living object that exhibits human traits (an "artifact") is presented to a camera or biometric sensor, it's called a "spoof." Photos, videos on screens, masks, and dolls are all common examples of spoof artifacts. When biometric data is tampered with post-capture, or the camera is bypassed altogether, that is called a "bypass." A deepfake puppet injected into the camera feed is an example of a bypass. There are no NIST/NVLAP lab tests available for PAD Level 3 or for Level 4 & 5 bypasses, as those attack vectors are missing from the ISO 30107-3 standard and thus from all associated lab testing. Only a Spoof Bounty Program can currently address Levels 1-5.

Level 1 (A) (Spoof Bounty Avail) - Artifact: Hi-res paper & digital photos, hi-def challenge/response videos, and paper masks.
Beware: iBeta lab tests DO NOT include deepfake puppet attacks, but FaceTec's Spoof Bounty DOES include deepfake puppets.

Level 2 (B) (Spoof Bounty Avail) - Artifact: Commercially available lifelike dolls, and human-worn resin, latex & silicone 3D masks under $300 in price.

Level 3 (C) (Spoof Bounty Avail) - Artifact: Custom-made ultra-realistic 3D masks, wax heads, etc., up to $3,000 in creation cost.

Level 4 (Spoof Bounty Avail) - Bypass: Decrypt & edit the contents of a 3D FaceMap™ so it contains synthetic data not collected from the session, and have the Server process it and respond with "Liveness Success."

Level 5 (Spoof Bounty Avail) - Bypass: Take over the camera feed & inject previously captured video frames or a deepfake puppet so that the FaceTec AI responds with "Liveness Success."

Spoofs & Zero-Day Exploits of Liveness Vendors

A Russian hacker called "White Usanka" has created videos showing how he can exploit weaknesses in Liveness and Remote ID Proofing software using free or very low-cost methods. The videos below explain how to use free tools like Spark AR, an Instagram filter creation app, to create head "movement" and/or simulate random color flashing:

Spoofing iProov with Generated.photos, Spark AR & OBS (2021)
Spoofing Innovatrics with Generated.photos & FaceSwap (2022)
Spoofing Sum & Substance with Spark AR & Photoshop (2021)
Spoofing Shufti Pro with Veriff.tools & Generated.photos (2021)

These vendors continue trying to make their software more secure, so these exact attacks may not work forever, or even at the time you are reading this, but when weak 2D Liveness is used, there always seem to be ways to beat the system. These videos show the incredibly difficult challenges that Liveness Detection and ID Proofing vendors are up against in the real world, and they also show why these vendors don't have Spoof Bounty Programs like FaceTec's. The success of the techniques in these videos proves that very few vendors are truly up to the task, despite many having been handed iBeta "conformances." Please note that White Usanka spent significant amounts of time attacking FaceTec's Spoof Bounty Program but has been unable to spoof or bypass the system, even with these techniques.

The Achilles' Heel of Weak Liveness

Some Liveness Detection methods, like those exploited above, will never be secure because they do not capture enough unique data to confirm the session is not being somehow faked. For example, if a 4K monitor presents a video to a device with a low-res 2D camera, and no glare or skew is observed, it is virtually impossible for the camera to determine that the 4K monitor is showing a spoof. The camera captures at a lower resolution than the screen presenting the spoof, so weak liveness algorithms will be fooled.

Weak Liveness Detection methods include: blink, smile, turn head, flashing lights, make random faces, speak random numbers, and more. All are fairly easy to spoof with higher-than-camera-resolution monitors or virtual cameras, with workarounds for randomness needed in some cases. User security and hard-won corporate credibility are put at risk by unscrupulous vendors' exaggerated Liveness security claims. Ask vendors how they can claim "Robust Liveness Detection" when they don't even have a public demo, let alone a Spoof Bounty Program.

Note: Watch USAA Bank's blink-based "Selfie-Recognition" app security get spoofed by a crude photo slideshow, easily unlocking one of their users' bank accounts. (Video)

Deepfakes Are Here, and the Threat Is Very Real

So-called "deepfakes" have been around for some time, but now even the general public understands that digital media can be manipulated easily. If the Liveness Detection tech is vulnerable to deepfakes derived from a photo or a video, it cannot be used for serious biometric security. Hackers stole 500m Yuan ($76.2m) by beating weak 2D, Active Liveness with stolen high-def photos they turned into deepfake puppets using a free app.
The puppets made it look like the users were responding to commands by turning their heads, blinking, and opening their mouths, while in reality the deepfake video frames were being injected directly into the camera feed.

Note: Watch as a basic "deepfake" puppet is created in 20 seconds and can spoof many liveness vendors. (Video: Realistic Deepfake Puppet from a Single Photo)

iBeta DOES NOT test for deepfakes or video injection, but FaceTec catches these attacks because of its Spoof Bounty Program experience. Sophisticated fraud is here, and 99% of so-called "Liveness" vendors aren't ready for it. Remember, iBeta PAD testing does not cover video injection or deepfakes, and both are now used in real-world attacks.

Note: Watch as a professional-level "deepfake" puppet is created from a single photo that can spoof many liveness vendors. (Video)

APIs & 100% Server-Side "Liveness" Can't Stop Deepfakes

A recent academic paper from Pennsylvania State University, Zhejiang University, and Shandong University delves into the weaknesses in Liveness APIs & Server-side-only Liveness. In these instances, the vendor's software runs only on the Server, so it can't confirm that a real camera feed is capturing live images of the user's face. This allows replayed face images to be injected, and the Server-side software is fooled completely. While many vendors have touted "Server-side Liveness Detection" as easy to integrate, this paper shows it is essentially useless for biometric security. These deepfakes are also now fooling live video chat operators.
They look so real that many interviewers don't even realize they are talking to a fraudster with a synthetic face being overlaid onto the fraudster's real face in real time. A Device SDK MUST be running on the User's Device to secure the camera feed and prevent deepfake injection. But remember, just having software on the user's device is not a panacea; rejecting video injection attacks is VERY difficult, and iBeta PAD does not test for these types of attacks. So unless your Liveness Vendor has a Spoof Bounty Program that includes Level 5 video injection, you can't have confidence that their software can block deepfake injection.

Note: Watch how DeepFaceLive replaces a living user's face with a deepfake of Arnold Schwarzenegger in real time. (Video)

2022 ENISA Remote Identity Proofing Guidelines

This report is the most recent and up-to-date analysis of threats to remote identity proofing systems. It highlights attacks that are very viable, yet still aren't acknowledged by iBeta or in the ISO 30107-3 PAD standard. These attack vectors include Level 4 & 5 attacks, like Deepfake Puppets and Video Injection. Get the 2022 ENISA Remote ID Proofing Guidelines - Download Here

Liveness.com Lists Free 2D Liveness Detection Providers Below

FaceTec's 2D Liveness Checks are ~98% accurate against Level 1-3 Spoof Attack Vectors (but not Level 4 & 5), so they are nowhere near as secure as 3D Liveness (+99.999% accurate) with a Device SDK, but there are scenarios where 2D Liveness Checks add some security; for example, at a Customs checkpoint in an airport, or at a semi-supervised retail store's self-checkout. In these scenarios a fraudster is unlikely to be able to use a deepfake image or bypass the camera to inject a recorded video. 2D Liveness doesn't require a Device SDK or special user interface.
It works on any mugshot-style 2D face photo; for FaceTec Customers and Partners, the number of FREE 2D Liveness checks is unlimited, and the images are processed 100% on the Customer's Server. Free 2D Liveness Detection is provided to ALL FaceTec Customers & Partners, so you can contact ANY* Certified FaceTec Provider and ask them about Free 2D Liveness Detection, or visit the 2D Passive Liveness Check Developers page for more information. *Participation by FaceTec Distribution Partners may vary.

FaceTec Certified 3D Liveness Providers

3D Liveness Detection is much stronger than 2D, and to prove it, FaceTec created a $200,000 Spoof Bounty Program to rebuff Level 1, 2 & 3 PAD attacks, Level 4 & 5 Template Tampering, and Virtual-Camera & Video Injection Attacks. Organizations have a fiduciary duty to provide the strongest Liveness Detection when they use remote onboarding, identity verification, or face authentication. FaceTec will perform over 500,000,000 3D Liveness Checks in 2022 alone!

FaceTec Certified 3D Liveness Vendors

With security powered & proven by the $200,000 Spoof Bounty Program & NIST/NVLAP Lab-Certified PAD Level 1 & 2 AI*:

01 Systems - Bahrain
ACMG - Ukraine
ActiveIT - Chile
Aikaki - PicLoq - India
Anyline - Austria
Autentikar - Chile
Authenteq - Iceland
Avarta - Singapore
Batelco - Public2 - Bahrain
Blockchains.com - USA
Bryk Group - Australia
BTS Digital - Kazakhstan
By Evolution - Spain
Certisign - Brazil
Civic - USA
Cummings Solutions - USA
Cynopsis - Singapore
e4 Global - South Africa
Edgesoft - Ecuador
EvidentID - USA
Evrotrust - Bulgaria
FaceTec - USA
FintechOS - Romania
Fractal - Germany
GameChange Solutions - Singapore
Gulf Data - gDi - UAE
HumaNode - Georgia
IDdataweb - USA
Idenfy - Lithuania
Identyum - Croatia
IDnow - Germany
Imagine Technologies - Jordan
In Solutions - Peru
Intellect Design Arena - India
Intellicheck - USA
IQSEC - Mexico
Journey.ai - USA
Karalundi - Mexico
Keydoc - Mexico & USA
Kvalifika - Georgia
Lynx Global - Hong Kong
Madani - Indonesia
Microblink - Croatia
NA-AT - Mexico
NEC - LATAM
Nets - Denmark
Neuvote - Canada
ODEK - South Africa
Ondato - Lithuania
Onfido - UK & USA
OneyTrust - France
Opes One - USA
Passbase - USA
PBSA Group - South Africa
Pixel Design - Kuwait
Biometrid - Portugal
Qundo - Germany
Sahal - Pakistan
Scytáles - Sweden
Signicat - Norway
SmartOSC - Vietnam
Socialnet - Argentina
Soft Baked Apps - Germany
Superguard - Australia
Sweeft Digital - Georgia
Synaps - France
T2P Co - Thailand
Tamkeem Tech - Iraq
TiC Now - Chile
Tekbees - Colombia
TeraSystems - Philippines
U-Payments - Chile
Unico - Brazil
Valid - Brazil
VerifyMyAge - United Kingdom
VeriTran - Argentina
VNG - Vietnam
Web Data Dome - Brazil
Xplor - Panama
Yoti - United Kingdom
ZealiD - Sweden
Ziphii - Nigeria

The Spoof Bounty Program provides incentivized public bypass testing for Template Tampering; Level 1-3 Presentation, Video Replay, Deepfake Injection, Virtual Camera, and MIPI Adapter Attacks.

*Vendors listed above have not been individually tested by an NVLAP/NIST-accredited lab for Level 1 & 2 Presentation Attacks; they are distributing FaceTec's software, v6.9.11 of which has been Certified to Level 2 with Level 1 regression testing.

Liveness for Onboarding, KYC, and Face Verification

Requiring every new user to prove their 3D Liveness before they are asked to present an ID Document during digital onboarding is in itself a huge deterrent to fraudsters, who don't ever want their real faces on camera. If an onboarding system has a weakness, bad actors will exploit it to create as many fake accounts as possible. To prevent this, strong Liveness Detection during new account onboarding should be required. Once it's proven that the new account belongs to a real human, their biometric data can be stored as a trusted reference for their digital identity in the future.
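The onboarding order just described (prove Liveness first, then match against the ID document, then store a trusted reference) can be sketched as a simple gate. All names here are hypothetical stand-ins, not FaceTec's API; the two checks are stubbed so the sketch runs:

```python
# Hypothetical onboarding gate: Liveness first, then ID match, then storage.

def check_liveness(face_scan: dict) -> bool:
    """Stub: a real check passes only for a live, physically present user."""
    return face_scan.get("live", False)

def match_faces(face_scan: dict, id_photo: dict) -> bool:
    """Stub: 1:1 comparison of the live scan to the ID document photo."""
    return face_scan.get("person") == id_photo.get("person")

TRUSTED = {}  # user_id -> stored face data (the only data kept long-term)

def onboard(user_id: str, face_scan: dict, id_photo: dict) -> str:
    # 1. Liveness FIRST: reject bots and spoof artifacts before anything else.
    if not check_liveness(face_scan):
        return "liveness_failed"
    # 2. Only then 1:1 match the live face to the ID document photo.
    if not match_faces(face_scan, id_photo):
        return "id_match_failed"
    # 3. Store the face data as a trusted reference for this digital identity.
    TRUSTED[user_id] = face_scan
    return "enrolled"
```

The point of the ordering is that a fraudster must put a live face on camera before the system will even look at their (possibly stolen) document.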
Liveness for Ongoing Face Authentication (Password/PKI Replacement)

Since most biometric attacks are spoof attempts or video injected using virtual camera software, strong 3D Liveness Detection during user authentication should be mandatory. With multiple high-quality photos of almost everyone available on Google or Facebook, a biometric authenticator cannot rely on secrecy for its security. 3D Liveness Detection is the first and most important line of defense against targeted spoof attacks on remote identification and authentication systems. The second line of defense is a stringent FAR (see Glossary, below) for accurate biometric matching. With 3D Liveness Detection you can't make a copy of your biometric data that would fool the system, even if you wanted to; Liveness catches the copies by detecting generation loss, and only the genuine, physically present user can gain access.

ISO/IEC 30107-3 - Presentation Attack Detection Standard: Circa 2017

ISO/IEC 30107-3 (https://www.iso.org/standard/67381.html) is the International Organization for Standardization's (ISO) testing guidance for the evaluation of anti-spoofing technology, a.k.a. Presentation Attack Detection, or "PAD." Four 30107 documents have been published to date, including ISO/IEC 30107-4:2020, the precursor to ISO/IEC WD 30107-4 currently in progress. Released in 2017, ISO 30107-3 served as official guidance for determining whether the subject of a biometric scan is alive. But since it allows PAD checks to be compounded with Matching, it can produce confusing, invalid results.
In 2020, with the introduction of deepfake puppets and other attack vectors not conceived of at the time of publication, ISO 30107-3 came to be considered by many experts to be outdated and incomplete. Due to "hill-climbing" attacks (see Glossary, bottom of page), biometric systems should never reveal which part of the system did or didn't "catch" a spoof. And, while ISO 30107-3 gets a lot right, it unfortunately does encourage testing both Liveness and Matching at the same time, whereas the scientific method requires that the fewest possible variables be tested at once. Liveness testing should be done with a solely Boolean (true/false) response, and it should not allow systems to have multiple decision layers that could let an artifact pass Liveness but fail Matching because it didn't "look" enough like the enrolled subject.

Why iBeta/Lab PAD Testing Isn't Enough...

"It ain't what you don't know that gets you into trouble. It's what you think you know that just ain't so." - Mark Twain

In our opinion, the iBeta PAD tests alone do not adequately represent the real-world threats a Liveness Detection system will face from hackers. Any 3rd-party testing is better than nothing, but taken at face value, iBeta tests provide a false sense of security due to being incomplete, too brief, having too much variation between vendors, and being much TOO EASY to pass. Unfortunately, iBeta allows vendors to choose whatever device(s) they want to use for the test, and most choose a new-model device with an 8-12MP camera. To put this in perspective, a 720p webcam is not even 1MP, and the higher the quality/resolution of the camera sensor, the easier the testing is to pass. Even though most consumers and end users don't have access to the "pay-per-view" ISO 30107-3 Standard document, iBeta refuses to add disclaimers in their Conformance Letters to warn customers & end users that their PAD tests ONLY contain Presentation Attacks, and not attempts to bypass the camera/sensor.
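The Boolean-only response recommended earlier (to blunt hill-climbing) can be sketched as an API surface that exposes a single true/false verdict and no sub-scores an attacker could iterate against. The internals and threshold here are hypothetical stand-ins:

```python
# Hypothetical liveness API: the caller sees ONLY a Boolean verdict.

_THRESHOLD = 0.90  # internal decision threshold; never disclosed

def _internal_liveness_score(session_data: bytes) -> float:
    # Stand-in for the real multi-signal AI (depth, texture, reflections...);
    # here we simply pretend longer sessions carry more 3D data.
    return min(len(session_data) / 100.0, 1.0)

def liveness_check(session_data: bytes) -> bool:
    """Return ONLY True or False.

    Revealing per-signal scores, or which layer rejected an artifact,
    would let an attacker "hill-climb": tweak the artifact repeatedly
    until each internal layer passes. A bare Boolean gives them nothing
    to optimize against.
    """
    return _internal_liveness_score(session_data) >= _THRESHOLD
```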
It is also unfortunate that ISO & iBeta both conflate Matching & Liveness into one unscientific testing protocol, making it impossible to know whether the Liveness Detection is actually working as it should in scenarios where matching is included. "PAD" means that iBeta testing only considers artifacts physically shown to a sensor. And even though injected digital attacks are the most scalable, iBeta DOES NOT TEST for any type of Virtual Camera Attack or Template Tampering as part of their PAD testing. So iBeta/lab PAD testing, no matter what Level it is, is NEVER enough to ensure real-world security. As far as we know, iBeta has never offered any sensor bypass testing to any PAD vendor at any time before this writing.

iBeta indirectly allows vendors to influence the number of attacks in their time-based testing, because some vendors have much longer session times than others. This means that by extending the time it takes for a session to be completed, a vendor can limit the number of attacks that can be performed in the allotted time. The goal of biometric security testing is to expose vulnerabilities, and when the number of attacks, the devices, and the tester skill levels are non-standardized, the testing is NOT equally difficult between vendors and/or isn't representative of real-world threats.

It's important to note that NO Level 3 testing is offered by iBeta any longer. It was offered for a few months under a "Level 3 Conformance," but then NIST notified iBeta that they didn't believe iBeta was capable of performing such important and difficult testing, and iBeta had to remove the Level 3 testing option.
However, iBeta still listed Level 3 testing on their website for over a year. The editors of this site believe the disclaimer stating "Level 3 cannot be tested by iBeta under its NIST accreditation" is purposefully missing, in an attempt to make iBeta's testing menu look more complete and their lab more competent. Note: iBeta staff have recently stated publicly that it was their "business" decision not to perform Level 3 testing, but this is false. On phone calls and in emails, iBeta staff repeatedly told editors of this website that iBeta was not able to perform Level 3 testing due to NIST's limitations. *This is disputed by iBeta.

iBeta doesn't usually test the vendor's face Liveness Detection software in web browsers, only in native smartphone apps, so numerous untested digital attack vectors exist even for systems that pass PAD testing. Another huge red flag in iBeta's testing is that they allow as much as 15% BPCER (Bona Fide Presentation Classification Error Rate), which we call False Reject Rate (FRR), enabling unscrupulous vendors to tighten security thresholds just to pass the test, but later lower security in their real product when customers experience poor usability. It has been verified in real-world testing that at least two vendors who claim a 0% Presentation Attack (PA) success rate in iBeta testing have, in independent testing, been found to have over 4% PA success rates. Note: iBeta DOES NOT require production-version verification, nor does it require the vendor to sign an affidavit stating they will not lower security thresholds in production versions of their software. Remember, robust Liveness Detection must cover all attack vectors, including digital attack vectors, so don't be fooled by an iBeta "Conformance" badge. While it's better than nothing, it's nowhere near enough.
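The two error rates discussed above are simple fractions over raw test outcomes, and computing them makes the 15% allowance concrete. A minimal sketch (the example counts are illustrative, not from any real iBeta report):

```python
def bpcer(bona_fide_results: list) -> float:
    """Bona fide presentation classification error rate:
    fraction of GENUINE user sessions wrongly rejected (a.k.a. FRR)."""
    rejected = sum(1 for accepted in bona_fide_results if not accepted)
    return rejected / len(bona_fide_results)

def attack_success_rate(attack_results: list) -> float:
    """Fraction of ATTACK presentations wrongly accepted
    (APCER in ISO 30107-3 terminology)."""
    accepted = sum(1 for accepted in attack_results if accepted)
    return accepted / len(attack_results)

# Illustrative run: 100 genuine sessions with 15 rejections hits the
# 15% BPCER ceiling mentioned above, while 150 attacks with 0 accepted
# yields the "0% PA success rate" a vendor would report.
genuine = [True] * 85 + [False] * 15
attacks = [False] * 150
```

A vendor can trade one number for the other by moving a single threshold, which is why a great lab score on one axis says little on its own.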
Make your vendor sign an affidavit saying they have not lowered security thresholds, demand they prove they have undergone penetration testing for digital attack vectors, and demand the vendor stand up a Spoof Bounty Program before they can earn your business.

FaceTec's $200,000 Spoof Bounty Program

Insist that your biometrics vendor maintain a persistent Spoof Bounty Program to ensure they are aware of, and robust to, any emerging threats, like deepfakes, video injection, or virtual-camera hijacking. As of this writing, the only biometric vendor with an active Spoof Bounty is FaceTec. Having rebuffed over 110,000 spoof attacks, the $200,000 Spoof Bounty Program's goal remains to uncover unknown vulnerabilities in the FaceTec Liveness AI and security scheme. If any are found, they are patched, and the security levels are elevated even further. Visit bounty.facetec.com to participate.

No Stored Liveness Data = No Honeypot Risk

Two types of data are required for every User Authentication: Face Data (for matching) and Liveness Data (to prove the Face Data was collected from a live person). Liveness Data must be timestamped, be valid only for a few minutes, and then be deleted. Only Face Data should ever be stored.
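A minimal sketch of this data-handling rule on a server, with hypothetical names: Liveness Data is timestamped, accepted only within a short validity window, never accepted twice, and never stored long-term:

```python
import time
from typing import Optional

LIVENESS_TTL_SECONDS = 180   # "valid only for a few minutes" (assumed value)
_used_liveness_ids = set()   # record of consumed one-time Liveness Data

def liveness_data_is_fresh(liveness_id: str, issued_at: float,
                           now: Optional[float] = None) -> bool:
    """Accept a piece of Liveness Data at most once, and only while fresh."""
    now = time.time() if now is None else now
    if liveness_id in _used_liveness_ids:
        return False   # replay: this "one-time key" was already used
    if now - issued_at > LIVENESS_TTL_SECONDS:
        return False   # stale: past its few-minute validity window
    _used_liveness_ids.add(liveness_id)
    return True        # consume it; the Liveness Data itself is then deleted
```

Because only the ID of consumed Liveness Data is remembered, and only Face Data is stored, a breach of the server yields nothing that can pass a future Liveness check.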
New Liveness Data must be collected for every authentication attempt. Face Data should be encrypted and stored without the corresponding Liveness Data, so it does not create honeypot risk. Note: Think of the stored Face Data as the lock, the User's newly collected Face Data as a one-time-use key, and the Liveness Data as proof that the key has never been used before.

Early Academic Papers About Liveness & Anti-Spoofing

One of the earliest papers on Liveness, "Spoofing and anti-spoofing measures," was published by Stephanie Schuckers, Ph.D., in 2002, and it is widely regarded as the foundation of today's academic body of work on the subject. The paper states that "Liveness detection is based on recognition of physiological information as signs of life from liveness information inherent to the biometric." Later, in 2016, her follow-up, "Presentations and Attacks, and Spoofs, Oh My," continued to influence presentation attack detection research and testing.

Ask The Editor: Is Facial Recognition the Same as Liveness & Facial Verification & Face Authentication?

No! And we need to start using the correct terminology if we want to stop confusing people about biometrics. Facial Recognition is for surveillance. It's the 1-to-N matching of images captured with cameras the user doesn't control, like those in a casino or an airport, and it only provides "possible" matches for the surveilled person from face photos stored in an existing database. Face Verification (1:1 matching to a legal identity data source + Liveness) is commonly used when onboarding users to new accounts; the trusted data can come from an ID document, a Passport chip, or a Government photo database. Face Authentication (1:1 matching + Liveness), on the other hand, takes user-initiated data collected from a device the user does control and confirms that user's identity for their own direct benefit, like, for example, secure account access. They may share a resemblance and even overlap in some ways, but don't lump the two together.
Like any powerful tech, this is a double-edged sword: Facial Recognition is a threat to privacy, while Facial Verification is a huge win for it.

Ask The Editor: Should We Fear Centralized Face Matching?

Fear of biometric matching stems from the belief that centralized storage of biometric data creates a "honeypot" that, if breached, compromises the security of all other accounts that rely on that same biometric data. Biometric detractors argue, "You can reset your password if stolen, but you can't reset your face." While this is true, it is a failure of imagination to stop there. We must ask, "What would make centralized biometric authentication safe?" The answer is strong Liveness Detection, backed by a public spoof bounty program, that requires the user to provide new Liveness data every time they log in. With this AI in place, the biometric honeypot is no longer something to fear, because the security doesn't rely on our biometric data being kept secret; it relies on it being provided by our living selves. Learn more about how strong Liveness makes centralized biometrics safe in this comprehensive FindBiometrics white paper.

Ask The Editor: Should Liveness Detection Be Required by Law?

We believe legislation must be passed to make strong Liveness Detection mandatory if biometrics are used for Remote Identity Proofing in KYC/AML-regulated industries. The reality is, all of our personal data has already been breached, so we can no longer trust Knowledge-Based Authentication (KBA). We must now turn our focus from maintaining databases full of "secrets" to securing attack surfaces. For the public good, current laws already require organic foods to be certified and every medical drug to be tested and approved. In turn, governments worldwide should require that strong Liveness Detection be employed to protect the biometric security and sensitive personal information of every citizen.
Liveness AI should be tested by labs informed by the latest ID Proofing Guidelines from ENISA.

Ask The Editor: Why Doesn't 2D Face Matching Work Very Well?

We've all heard an actor say, "Get my good side," and the best photographers know which distances and lenses make portrait photos the most flattering. This is because a real 3D human face contains orders of magnitude more data than a typical 2D photo, and when a 3D face is flattened into a single 2D layer, depth data is lost, which creates significant issues. In the real world, capture distance, camera position, and lens diameter play big parts in how well a derivative 2D photo represents the original 3D face. (Source: "Best Portrait Lens - Focal Length, Perspective & Distortion," Matt Granger, Oct 27, 2017)

2D Face Matching will not always "see" her as the same person. In some frames she might look more like her sister or her cousin, and she could match one of them even more highly than herself. In large datasets these visual differences are within the margin of error of the 2D algorithms, and they make confidence in the 1:N match impossible. However, 3D FaceMaps not only provide more human signal for Liveness Detection, they also provide data about the shape of the face, which is combined with unique visual traits to increase accuracy and enable 1:N matching on significantly larger datasets. FaceTec's current 1:N search for de-duplication provides an Elastic-FAR of 1/125M to 1/1B+ at only 2% FRR.

Ask The Editor: What's the Problem With Text & Photo CAPTCHAs?

CAPTCHA (wiki), an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart," is a simple challenge-response test used in computing to determine whether the user is human or a bot. In an article on TheVerge.com, Josh Dzieza writes, "Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent." Jason Polakis, a computer scientist who used off-the-shelf image recognition tools, including Google's own image search, to solve Google's image CAPTCHA with 70% accuracy, states: "You need something that's easy for an average human, it shouldn't be bound to a specific subgroup of people, and it should be hard for computers at the same time." Even without AI, services like deathbycaptcha.com and anti-captcha.com allow spam bots to bypass the challenge-response tests by using proxy humans to complete them. With so many people willing to do this work, it's cheap to defeat CAPTCHAs at scale; workers earn between $0.25 and $0.60 for every 1,000 CAPTCHAs solved (webemployed).

Resources & Whitepapers

European Union Agency for Cybersecurity (ENISA) - Remote Identity Proofing - Attacks & Countermeasures - published January 20, 2022: https://www.enisa.europa.eu/publications/remote-identity-proofing-attacks-countermeasures

Information Security Magazine - Dorothy E. Denning's (wiki) 2001 article, "It Is 'Liveness,' Not Secrecy, That Counts"

Acuity - https://www.acuitymi.com/product-page/copy-of-constellation-face-verification-amp-liveness-for-remote-digital-onboa

FaceTec: Liveness Detection - Biometrics' Final Frontier & FaceTec Reply to NIST 800-63 RFI

Gartner: "Presentation attack detection (PAD, a.k.a. 'liveness testing') is a key selection criterion. ISO/IEC 30107 'Information Technology - Biometric Presentation Attack Detection' was published in 2017." (Gartner's Market Guide for User Authentication, Analysts: Ant Allan, David Mahdi, published 26 November 2018.)
FaceTec's ZoOm was cited in the report. For subscriber access: https://www.gartner.com/doc/3894073?ref=mrktg-srch

Forrester: "The State Of Facial Recognition For Authentication - Expedites Critical Identity Processes For Consumers And Employees," by Andras Cser, Alexander Spiliotes, Merritt Maxim, with Stephanie Balaouras, Madeline Cyr, Peggy Dostie. For subscriber access: https://www.forrester.com/report/The+State+Of+Facial+Recognition+For+Authentication+And+Verification/-/E-RES141491#

Ghiani, L., Yambay, D.A., Mura, V., Marcialis, G.L., Roli, F. and Schuckers, S.A., 2017. Review of the Fingerprint Liveness Detection (LivDet) competition series: 2009 to 2015. Image and Vision Computing, 58, pp. 110-128: https://www.clarkson.edu/sites/default/files/2017-11/Fingerprint%20Liveness%20Detection%2009-15.pdf

Schuckers, S., 2016. Presentations and attacks, and spoofs, oh my. Image and Vision Computing, 55, pp. 26-30: https://www.clarkson.edu/sites/default/files/2017-11/Presentations%20and%20Attacks.pdf

Schuckers, S.A., 2002. Spoofing and anti-spoofing measures. Information Security Technical Report, 7(4), pp. 56-62: https://www.clarkson.edu/sites/default/files/2017-11/Spoofing%20and%20Anti-Spoofing%20Measures.pdf

FaceTec's Official Reply to NIST 800-63 RFI: https://facetec.com/NIST_800-63_RFI_FaceTec_Reply.pdf

FaceTec Discussion With ENISA on IDV Guidelines 2021: https://facetec.com/ENISA_RFI_Remote_ID_Attacks_FaceTec_Countermeasures.pdf

Glossary - Biometrics Industry & Testing Terms

1:1 (1-to-1) - Comparing the biometric data from a subject User to the biometric data stored for the expected User. If the biometric data does not match above the chosen FAR level, the result is a failed match.

1:N (1-to-N) - Comparing the biometric data from one individual to the biometric data from a list of known individuals; the faces of the people on the list that look similar are returned.
This is used for facial recognition surveillance, but can also be used to flag duplicate enrollments.

Artifact (Artefact) – An inanimate object that seeks to reproduce human biometric traits.

Authentication – The concurrent Liveness Detection, 3D depth detection, and biometric data verification (i.e., face matching) of the User.

Bad Actor – A criminal; a person who intends to commit fraud by deceiving others.

Biometric – The measurement and comparison of data representing the unique physical traits of an individual, for the purpose of identifying that individual based on those unique traits.

Certification – The testing of a system to verify its ability to meet or exceed a specified performance standard. iBeta used to issue certifications, but now it can only issue conformances.

Complicit User Fraud – When a User pretends to have fraud perpetrated against them but is actually involved in a scheme to defraud an institution by stealing an asset and trying to get it replaced.

Cooperative User/Tester – When the human Subjects used in tests provide any and all biometric data that is requested. This helps to assess complicit User fraud and phishing risk, but only applies if the test includes matching (not recommended).

Centralized Biometric – Biometric data is collected on any supported device, encrypted, and sent to a server for enrollment and later authentication from that device or any other supported device. When the User's original biometric data is stored on a secure third-party server, that data can continue to be used as the source of trust, and the User's identity can be established and verified at any time. Any supported device can be used to collect and send biometric data to the server for comparison, enabling Users to access their accounts from all of their devices, new devices, etc., just like with passwords.
Liveness is the most critical component of a centralized biometric system, and because certified Liveness did not exist until recently, centralized biometrics have not yet been widely deployed.

Credential Sharing – When two or more individuals do not keep their credentials secret and can access each other's accounts. This can be done to subvert licensing fees or to trick an employer into paying for time not worked (also called “buddy punching”).

Credential Stuffing – A cyberattack in which stolen account credentials, usually lists of usernames and/or email addresses and the corresponding passwords, are used to gain unauthorized access to user accounts.

Decentralized Biometric – When biometric data is captured and stored on a single device and the data never leaves that device. Fingerprint readers in smartphones and Apple's Face ID are examples of decentralized biometrics. They only unlock one specific device, they require re-enrollment on any new device, and they do not prove the identity of the User whatsoever. Decentralized biometric systems can be defeated easily if a bad actor knows the device's override PIN, allowing them to overwrite the User's biometric data with their own.

End User – An individual human who is using an application.

Enrollment – When biometric data is collected for the first time, encrypted, and sent to the server. Note: Liveness must be verified, and a 1:N check should be performed against all other enrollments to check for duplicates.

Face Authentication – Authentication has three parts: Liveness Detection, 3D Depth Detection, and Identity Verification.
All must be done concurrently on the same face frames.

Face Matching – Newly captured images/biometric data of a person are compared to the enrolled (previously saved) biometric data of the expected User to determine if they are the same person.

Face Recognition – Images/biometric data of a person are compared against a large list of known individuals to determine if they are the same person.

Face Verification – Matching the biometric data of the Subject User to the biometric data of the Expected User.

FAR (False Acceptance Rate) – The probability that the system will accept an imposter's biometric data as the correct User's data and incorrectly provide access to the imposter.

FIDO – Fast IDentity Online: a standards organization that provides guidance to organizations that choose to use decentralized biometric systems (https://fidoalliance.org).

FRR/FNMR/FMR – The probability that a system will reject the correct User when that User's biometric data is presented to the sensor. If the FRR is high, Users will be frustrated with the system because they are prevented from accessing their own accounts.

Hill-Climbing Attack – When an attacker uses information returned by the biometric authenticator (a match level or liveness score) to learn how to refine their attacks, gaining a higher probability of spoofing the system.

iBeta – A NIST-accredited testing lab in Denver, Colorado; the only lab currently certifying biometric systems for anti-spoofing/Liveness Detection to the ISO 30107-3 standard (ibeta.com).

Identity & Access Management (IAM) – A framework of policies and technologies to ensure only authorized users have appropriate access to restricted technology resources, services, physical locations, and accounts.
Also called identity management (IdM).

Imposter – A living person with traits so similar to the Subject User's that the system determines the biometric data is from the same person.

ISO 30107-3 – The International Organization for Standardization's testing guidance for the evaluation of anti-spoofing technology (www.iso.org/standard/67381.html).

Knowledge-Based Authentication (KBA) – An authentication method that seeks to prove the identity of someone accessing a digital service. KBA requires knowing a user's private information to prove that the person requesting access is the owner of the digital identity. Static KBA is based on a pre-agreed set of shared secrets; dynamic KBA is based on questions generated from additional personal information.

Liveness Detection or Liveness Verification – The ability of a biometric system to determine whether data has been collected from a live human or from an inanimate, non-living Artifact.

NIST – National Institute of Standards and Technology: the U.S. government agency that provides measurement science, standards, and technology to advance economic advantage in business and government (nist.gov).

Phishing – When a User is tricked into giving a Bad Actor their passwords, PII, credentials, or biometric data. Example: a User gets a phone call from a fake customer-service agent who requests the User's password to a specific website.

PII – Personally Identifiable Information: information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context (en.wikipedia.org/wiki/Personally_identifiable_information).

Presentation Attack Detection (PAD) – A framework for detecting presentation attack events.
Related to Liveness Detection and Anti-Spoofing.

Root Identity Provider – An organization that stores biometric data appended to the corresponding personal information of individuals, and allows other organizations to verify the identities of Subject Users by providing biometric data to the Root Identity Provider for comparison.

Selfie Matching – When a user provides their own biometric data to be compared to trusted data that they provided previously or that is stored by an identity issuer. 2D facial recognition algorithms are not well suited for Selfie Matching because 3D human faces appear very different depending on the distance of the capture.

Spoof – When a non-living object that exhibits some biometric traits is presented to a camera or biometric sensor. Photos, masks, or dolls are examples of Artifacts used in spoofs.

Subject User – The individual who is presenting their biometric data to the biometric sensor at that moment.

Synthetic Identity – When a bad actor uses a combination of biometric data, name, social security number, address, etc. to create a new record for a person who doesn't actually exist, for the purpose of opening and using an account in that name.

Editors & Contributors

Kevin Alan Tussy - Editor-in-Chief (LinkedIn)
John Wojewidka - Senior Editor (LinkedIn)
Josh Rose - Tech Editor (LinkedIn)

©2022, Liveness.com. All rights reserved.