By Jussi Nieminen | 2020 Jul 03
In this article, you’ll learn how to automatically detect faces in a PDF document using the open-source face-api.js and permanently redact them using PDFTron WebViewer, a JavaScript PDF library.
Before we get started, you can try WebViewer's AI-powered redact faces demo.
This app will work in any modern browser and doesn’t rely on a server to detect faces or redact the PDF (the entire transaction occurs client-side in the browser).
Here’s the tool we’re going to build:
[Video: a demo of automatically detecting and redacting faces in a PDF with WebViewer]
Working in Angular? Check out our specific guide for redaction with Angular in WebViewer.
To get up and running, we first create our index.html:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Redact faces with PDFTron WebViewer</title>
  <style>
    body {
      margin: 0;
      padding: 0;
    }
    #viewer {
      width: 100vw;
      height: 100vh;
    }
  </style>
</head>
<body>
  <!-- WebViewer will be included inside #viewer div -->
  <div id="viewer"></div>
</body>
</html>
Next, we need an HTTP server to serve our small app from. In my case, I'm going to use Live Server, but you can choose any HTTP server.
To run Live Server, install it through npm with the command npm install -g live-server, then start it with live-server. If you prefer to avoid global npm installations, you can use npx, which is bundled with npm: npx live-server. Either way, you can then access the server at http://127.0.0.1:8080 by default.
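For reference, here are those commands in one place:
npm install -g live-server   # one-time global install
live-server                  # start the server from the project root
npx live-server              # alternative: run without a global install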
Now that we have our basic setup, we can add WebViewer to the page. You can download the demo version of WebViewer from https://dev.pdftron.com?platform=web. Once the download is complete and you have extracted the zip file, copy the full contents of the lib directory to the same directory as your index.html. Let's also create our main JavaScript file: create a directory ./src and add the main JavaScript file index.js inside.
Next, we include these JavaScript files in the page by adding new <script> tags under our <title> tag.
<script defer src="/lib/webviewer.min.js"></script>
<script defer src="/src/index.js"></script>
If you now check the page, it will still be empty as we first need to initialize WebViewer to see it.
WebViewer(
  {
    path: '/lib',
    enableFilePicker: true,
  },
  document.getElementById('viewer')
).then(function(webViewerInstance){
  const FitMode = webViewerInstance.UI.FitMode;
  webViewerInstance.UI.setFitMode(FitMode.FitWidth);
});
With the above, we have initialized WebViewer and set the FitWidth fit mode, which sets the document zoom level to match the browser window’s width.
Now you should be able to open a document by clicking the menu icon in the top right corner and choosing open document. If you want to open a document as WebViewer loads, you can use the initialDoc property in the WebViewer options.
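As a minimal sketch (the /files/sample.pdf path is just a placeholder for a PDF served by your HTTP server):
WebViewer(
  {
    path: '/lib',
    initialDoc: '/files/sample.pdf', // placeholder path
    enableFilePicker: true,
  },
  document.getElementById('viewer')
).then(function(webViewerInstance){
  // ...
});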
Next, we will add a new custom button to the toolbar from which we can trigger the face recognition action. To start, let's add a new function and call it addRedactFacesButtonToHeader.
function addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick) {
}
We will pass in the WebViewer instance and a click handler that will be called when the user clicks the button. To add the button, we use the WebViewer instance's setHeaderItems API.
function addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick) {
  webViewerInstance.UI.setHeaderItems(function setHeaderItemsCallback(header){
    // button icon
    const image = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-6-3a2 2 0 11-4 0 2 2 0 014 0zm-2 4a5 5 0 00-4.546 2.916A5.986 5.986 0 0010 16a5.986 5.986 0 004.546-2.084A5 5 0 0010 11z" clip-rule="evenodd"></path></svg>';
    const items = header.getItems();
    const redactButton = {
      type: 'actionButton',
      img: image,
      title: 'Redact faces',
      onClick: onRedactFacesButtonClick,
    };
    // add button to header items
    items.splice(10, 0, redactButton);
    // update header
    header.update(items);
  });
}
In the code above, we first create an icon image for our button. Then we get the items array from the header and insert our new button with Array.splice(); for this example, we insert it at index 10, which places it as the first item on the right-hand side of the action bar. Lastly, we update the header with the modified items array.
To make the button visible, we still need to call the addRedactFacesButtonToHeader function when WebViewer initializes.
WebViewer(
  {
    path: '/lib',
    enableFilePicker: true,
  },
  document.getElementById('viewer')
).then(function(webViewerInstance){
  const FitMode = webViewerInstance.UI.FitMode;
  webViewerInstance.UI.setFitMode(FitMode.FitWidth);
  const onRedactFacesButtonClick = function(){ console.log('onRedactFacesButtonClick click') };
  addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick);
});
For now, the click handler is a placeholder that logs the button click; we will replace it later. You should now see our custom button on the top bar, and clicking it should log messages to the console.
For facial recognition, we will use face-api.js. You could use other libraries instead with minimal changes.
First, we need to get a few files from face-api.js. Copy dist/face-api.min.js to the same directory where index.html is located. Next, we'll get the face recognition models from the /weights directory. We will use the SSD Mobilenet V1 model, so copy ssd_mobilenetv1_model-weights_manifest.json, ssd_mobilenetv1_model-shard1, and ssd_mobilenetv1_model-shard2 into a new ./models directory inside your project. You can experiment with other models as well, but for the rest of this project we will stick with this one.
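For orientation, the project layout should now look roughly like this (names follow the steps above):
index.html
face-api.min.js
lib/        (WebViewer files)
src/
  index.js
models/
  ssd_mobilenetv1_model-weights_manifest.json
  ssd_mobilenetv1_model-shard1
  ssd_mobilenetv1_model-shard2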
To include face-api.js, we add it to the HTML and load the model with ssdMobilenetv1.loadFromUri.
<script defer src="/lib/webviewer.min.js"></script>
<script defer src="/face-api.min.js"></script>
<script defer src="/src/index.js"></script>
// Load face-api.js model
faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
WebViewer(
  {
    path: '/lib',
    enableFilePicker: true,
  },
  document.getElementById('viewer')
).then(function(webViewerInstance){
  const FitMode = webViewerInstance.UI.FitMode;
  webViewerInstance.UI.setFitMode(FitMode.FitWidth);
  const onRedactFacesButtonClick = function(){ console.log('onRedactFacesButtonClick button was clicked') };
  addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick);
});
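Note that loadFromUri returns a Promise, and the call above starts loading the model without waiting for it to finish. The model is usually ready well before the first button click, but if you want to be certain, you could keep a reference to the Promise and await it in the click handler — a minimal sketch (modelLoaded is our own name):
// keep a reference to the model-loading Promise
const modelLoaded = faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
// ...then, at the start of the click handler:
// await modelLoaded;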
Once we have face-api.js set up, we can wire it into WebViewer. The first thing we will do is get the document from WebViewer and loop over every page the document has. This will be added to the click handler of our custom button. We need to be able to access the WebViewer instance inside this click handler, so first we create a function that encloses the instance in a closure. We'll call this function onRedactFacesButtonClickFactory, and it will return the actual click handler function.
function onRedactFacesButtonClickFactory(webViewerInstance){
  return async function onRedactFacesButtonClick(){
  }
}
Now that we have this skeleton, we can add the logic of getting the document and looping over all the pages it has.
function onRedactFacesButtonClickFactory(webViewerInstance){
  return async function onRedactFacesButtonClick(){
    // get document from WebViewer
    const document = webViewerInstance.Core.documentViewer.getDocument();
    // get page count of the document
    const numberOfPages = document.getPageCount();
    for(let pageNumber = 1; pageNumber <= numberOfPages; pageNumber++){
      // loop over the pages. In the next phase we will add logic here
    }
  }
}
Next, we create the click handler through the factory and assign it to our custom button.
const onRedactFacesButtonClick = onRedactFacesButtonClickFactory(webViewerInstance);
addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick)
Now we can loop over the document's pages on a click of our custom button. We'll add the first part of the face detection by creating a new function for detecting faces, called detectAndRedactFacesFromPage. We also pass the WebViewer instance and page number, as these are needed when we detect faces.
function detectAndRedactFacesFromPage(webViewerInstance, pageNumber){
}
Face-api.js works with images, so the first thing we will do is convert the page to an image using the loadCanvasAsync API.
function detectAndRedactFacesFromPage(webViewerInstance, pageNumber){
  return new Promise(function(resolve, reject){
    const pageIndex = pageNumber - 1;
    const doc = webViewerInstance.Core.documentViewer.getDocument();
    doc.loadCanvasAsync({
      pageIndex,
      zoom: 0.5, // Scale page size down to allow faster image processing
      drawComplete: function drawComplete(canvas) {
        resolve();
      }
    });
  });
}
We wrap our loadCanvasAsync call in a Promise to make it easier to call this function from outside. Notice that we also use a zoom of 0.5 here. This makes the conversion faster, as we don't need a full-resolution image for face recognition. You can try different zoom values between 0 and 1 to find the optimal value for your use case.
To call this function, we will add it inside our page loop.
function onRedactFacesButtonClickFactory(webViewerInstance){
  return async function onRedactFacesButtonClick(){
    // get document from WebViewer
    const document = webViewerInstance.Core.documentViewer.getDocument();
    // get page count of the document
    const numberOfPages = document.getPageCount();
    for(let pageNumber = 1; pageNumber <= numberOfPages; pageNumber++){
      await detectAndRedactFacesFromPage(webViewerInstance, pageNumber);
    }
  }
}
Now we can convert our page to a canvas; next we turn that canvas into an image for use in face-api.js. First, we create a small helper function for the canvas-to-image conversion. We can only return an image once it is fully loaded, so we wrap it in a Promise.
function convertCanvasToImage(canvas){
  return new Promise(function(resolve){
    const base64ImageDataURL = canvas.toDataURL('image/jpeg');
    const image = new Image();
    image.onload = () => {
      // resolve image once it is fully loaded
      resolve(image);
    };
    image.src = base64ImageDataURL;
  });
}
Next, we use convertCanvasToImage in our detectAndRedactFacesFromPage function.
function detectAndRedactFacesFromPage(webViewerInstance, pageNumber){
  return new Promise(function(resolve, reject){
    const pageIndex = pageNumber - 1;
    const doc = webViewerInstance.Core.documentViewer.getDocument();
    doc.loadCanvasAsync({
      pageIndex,
      zoom: 0.5, // Scale page size down to allow faster image processing
      drawComplete: function drawComplete(canvas) {
        convertCanvasToImage(canvas).then(async (image) => {
          resolve();
        });
      }
    });
  });
}
Finally, we can do facial recognition using the converted image. To do that, we use the detectAllFaces API.
function detectAndRedactFacesFromPage(webViewerInstance, pageNumber){
  return new Promise(function(resolve, reject){
    const pageIndex = pageNumber - 1;
    const doc = webViewerInstance.Core.documentViewer.getDocument();
    doc.loadCanvasAsync({
      pageIndex,
      zoom: 0.5, // Scale page size down to allow faster image processing
      drawComplete: function drawComplete(canvas) {
        convertCanvasToImage(canvas).then(async (image) => {
          const detections = await faceapi.detectAllFaces(image, new faceapi.SsdMobilenetv1Options({
            minConfidence: 0.50,
            maxResults: 100
          }));
          resolve();
        });
      }
    });
  });
}
Here we pass our image to the detectAllFaces API and tell it to use the SSD Mobilenet V1 algorithm. minConfidence sets a minimum threshold, so only detections the algorithm is at least 50% confident about are included. You can play with this value and see what works best for you. By default, the maximum number of faces face-api.js can detect is 100. If your document has a lot of faces per page, you can increase the maxResults property.
As we used scaled-down images for recognition, we still need to resize the detection coordinates to match our original page, using face-api.js's resizeResults API. We can get the original size of the page from the document using the getPageInfo function and pass the resulting size object to the resizeResults function.
// get original page size
const pageInfo = doc.getPageInfo(pageIndex);
const displaySize = { width: pageInfo.width, height: pageInfo.height }
function detectAndRedactFacesFromPage(webViewerInstance, pageNumber){
  return new Promise(function(resolve, reject){
    const pageIndex = pageNumber - 1;
    const doc = webViewerInstance.Core.documentViewer.getDocument();
    const pageInfo = doc.getPageInfo(pageIndex);
    const displaySize = { width: pageInfo.width, height: pageInfo.height };
    doc.loadCanvasAsync({
      pageIndex,
      zoom: 0.5, // Scale page size down to allow faster image processing
      drawComplete: function drawComplete(canvas) {
        convertCanvasToImage(canvas).then(async (image) => {
          const detections = await faceapi.detectAllFaces(image, new faceapi.SsdMobilenetv1Options({
            minConfidence: 0.50,
            maxResults: 100
          }));
          // and pass displaySize to resizeResults
          const resizedDetections = faceapi.resizeResults(detections, displaySize);
          resolve();
        });
      }
    });
  });
}
We now have full facial recognition working for our document! Next we will redact faces from the document. We'll start by adding a new function which takes the WebViewer instance, page number, and the detected faces array as arguments.
function createFaceRedactionAnnotation(webViewerInstance, pageNumber, faceDetections){
}
We will call our new createFaceRedactionAnnotation function after we have detected faces, so for now our drawComplete function should look like this:
function drawComplete(canvas) {
  convertCanvasToImage(canvas).then(async (image) => {
    const detections = await faceapi.detectAllFaces(image, new faceapi.SsdMobilenetv1Options({
      minConfidence: 0.50,
      maxResults: 100
    }));
    // As we scaled our image, we need to resize faces back to the original page size
    const resizedDetections = faceapi.resizeResults(detections, displaySize);
    createFaceRedactionAnnotation(webViewerInstance, pageNumber, resizedDetections);
    resolve();
  });
}
To be able to use the RedactionAnnotation API, we first need to enable redactions in the WebViewer configuration using the enableRedaction property. RedactionAnnotation also requires access to the full API, which we enable by setting the fullAPI property to true.
WebViewer(
  {
    path: '/lib',
    fullAPI: true,
    enableRedaction: true,
    enableFilePicker: true,
  },
  document.getElementById('viewer')
).then(function(webViewerInstance){
  const FitMode = webViewerInstance.UI.FitMode;
  webViewerInstance.UI.setFitMode(FitMode.FitWidth);
  const onRedactFacesButtonClick = onRedactFacesButtonClickFactory(webViewerInstance);
  addRedactFacesButtonToHeader(webViewerInstance, onRedactFacesButtonClick);
});
To start creating the RedactionAnnotation, we first make sure at least one face was detected on the page, and then create a quad for every detected face. We could create a separate RedactionAnnotation for each face, but that leads to slow performance when there are many faces on a single page. So instead we create one RedactionAnnotation containing multiple Quads per page.
function createFaceRedactionAnnotation(webViewerInstance, pageNumber, faceDetections){
  if(faceDetections && faceDetections.length > 0){
    const { Annotations, annotationManager } = webViewerInstance.Core;
    const quads = faceDetections.map((detection) => {
      const x = detection.box.x;
      const y = detection.box.y;
      const width = detection.box.width;
      const height = detection.box.height;
      const topLeft = [x, y];
      const topRight = [x + width, y];
      const bottomLeft = [x, y + height];
      const bottomRight = [x + width, y + height];
      // Quad is defined as points going from bottom left -> bottom right -> top right -> top left
      return new Annotations.Quad(...bottomLeft, ...bottomRight, ...topRight, ...topLeft);
    });
  }
}
To create a quad, we get the face's coordinates from the detection box, and from those coordinates we compute all four corners of the Quad. Once we have all the Quads defined, we simply create the RedactionAnnotation and ask the annotationManager to add it using addAnnotation.
function createFaceRedactionAnnotation(webViewerInstance, pageNumber, faceDetections){
  if(faceDetections && faceDetections.length > 0){
    const { Annotations, annotationManager } = webViewerInstance.Core;
    const quads = faceDetections.map((detection) => {
      const x = detection.box.x;
      const y = detection.box.y;
      const width = detection.box.width;
      const height = detection.box.height;
      const topLeft = [x, y];
      const topRight = [x + width, y];
      const bottomLeft = [x, y + height];
      const bottomRight = [x + width, y + height];
      // Quad is defined as points going from bottom left -> bottom right -> top right -> top left
      return new Annotations.Quad(...bottomLeft, ...bottomRight, ...topRight, ...topLeft);
    });
    const faceAnnotation = new Annotations.RedactionAnnotation({
      Quads: quads,
    });
    faceAnnotation.Author = annotationManager.getCurrentUser();
    faceAnnotation.PageNumber = pageNumber;
    faceAnnotation.StrokeColor = new Annotations.Color(255, 0, 0, 1);
    annotationManager.addAnnotation(faceAnnotation, false);
    // Annotation needs to be redrawn so that it becomes visible immediately rather than the next time the page is refreshed
    annotationManager.redrawAnnotation(faceAnnotation);
  }
}
We leave the RedactionAnnotations in the document in a draft state, where they still need to be applied by the user. If you would like to apply these annotations automatically, you can use the annotationManager's applyRedactions API.
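For example, a minimal sketch that applies the redaction right after adding it, instead of leaving it as a draft (applyRedactions returns a Promise):
// apply the redaction immediately instead of leaving it in a draft state
annotationManager.applyRedactions([faceAnnotation]).then(function(){
  console.log('Redactions applied');
});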
Now we have the full facial recognition and redaction flow working. To make processing more user-friendly, however, we will add a progress indicator.
First, we add two new files, progress.js and progress.css, to our ./src directory, and include them in index.html.
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Redact faces with PDFTron WebViewer</title>
  <script defer src="/lib/webviewer.min.js"></script>
  <script defer src="/face-api.min.js"></script>
  <script defer src="/src/progress.js"></script>
  <script defer src="/src/index.js"></script>
  <style>
    body {
      margin: 0;
      padding: 0;
    }
    #viewer {
      width: 100vw;
      height: 100vh;
    }
  </style>
  <link type="text/css" rel="stylesheet" href="src/progress.css">
</head>
<body>
  <div id="viewer"></div>
</body>
</html>
In progress.css, add the following styles:
#redact-progress-container {
  position: relative;
}
#redact-progress-container.visible {
  display: flex;
}
#redact-progress-container.hidden {
  display: none;
}
.redact-progress {
  position: absolute;
  top: 0;
  left: 0;
}
.white-out {
  position: absolute;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  background-color: rgba(255, 255, 255, .8);
}
.redact-progress-content {
  position: absolute;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
}
In progress.js, add the following code:
const template = `
  <div class="redact-progress">
    <div class="white-out"></div>
    <div class="redact-progress-content"></div>
  </div>
`;

/**
 * Create DOM elements that are used for displaying face detection progress
 */
function addProgressContainerToDom(){
  const viewerDomElement = document.querySelector('#viewer');
  const existingProgressContainer = viewerDomElement.querySelector('#redact-progress-container');
  if(existingProgressContainer){
    viewerDomElement.removeChild(existingProgressContainer);
  }
  const redactProgressContainerDiv = document.createElement('div');
  redactProgressContainerDiv.setAttribute('id', 'redact-progress-container');
  redactProgressContainerDiv.classList.add('hidden');
  redactProgressContainerDiv.innerHTML = template;
  viewerDomElement.insertBefore(redactProgressContainerDiv, viewerDomElement.firstChild);
}

/**
 * Creates a custom HTML component that shows the progress of face detection
 *
 * @param {number} totalNumberOfPages Total number of pages in the document
 * @returns {{showProgress: function, hideProgress: function, sendPageProcessing: function}} Object containing functions that control the progress display
 */
function createProgress(totalNumberOfPages){
  let processedSoFar = 0;
  const pageProcessedEventType = 'page-processed';
  addProgressContainerToDom();
  const progressContainer = document.querySelector('#redact-progress-container');
  const progressContent = document.querySelector('.redact-progress-content');
  // custom event listener for page processed events
  progressContent.addEventListener(pageProcessedEventType, (e) => {
    processedSoFar++;
    progressContent.innerHTML = `Detecting faces from page ${processedSoFar} / ${totalNumberOfPages}`;
  });
  function sendPageProcessing(){
    const pageProcessedEvent = new CustomEvent(pageProcessedEventType);
    progressContent.dispatchEvent(pageProcessedEvent);
  }
  function showProgress(){
    progressContainer.classList.remove('hidden');
    progressContainer.classList.add('visible');
  }
  function hideProgress(){
    progressContainer.classList.remove('visible');
    progressContainer.classList.add('hidden');
  }
  return {
    showProgress,
    hideProgress,
    sendPageProcessing
  };
}
The code here is basic HTML manipulation, so we won't cover it in full detail. It adds a modal layer on top of the page that can be controlled with the showProgress and hideProgress functions. To update the page information, the progress component returns a sendPageProcessing function.
Next, we add this logic to our button handler factory, onRedactFacesButtonClickFactory.
function onRedactFacesButtonClickFactory(webViewerInstance){
  return async function onRedactFacesButtonClick(){
    // get document from WebViewer
    const document = webViewerInstance.Core.documentViewer.getDocument();
    // get page count of the document
    const numberOfPages = document.getPageCount();
    const { sendPageProcessing, showProgress, hideProgress } = createProgress(numberOfPages);
    showProgress();
    for(let pageNumber = 1; pageNumber <= numberOfPages; pageNumber++){
      sendPageProcessing();
      await detectAndRedactFacesFromPage(webViewerInstance, pageNumber);
    }
    hideProgress();
  }
}
This function creates a progress indicator by calling createProgress, which returns the functions we need to display and update the progress modal. Before we start processing pages, we call showProgress() to show the progress information. Before starting facial recognition on each page, we call sendPageProcessing() to update the progress. Finally, once all pages are processed, we call hideProgress() to remove the progress information and show the document.
That's it!
You can find the full source code at https://github.com/PDFTron/webviewer-facial-redaction-sample.
If you’d also like to automatically redact text, have a look at our Automating Document Redaction in a Web App blog.
As you can see, automatically detecting faces and redacting them from PDFs using JavaScript isn’t too complicated when using WebViewer and an open source toolkit like face-api.js.
WebViewer can also be used to extend your app with even more unique client-side document functionality.
Get started with WebViewer and let us know what you build!
We hope you found this article helpful! If you have any questions or comments, don’t hesitate to contact us.