Advanced Usage
This advanced guide covers optional methods that make embedding LIQA more seamless and provide more control over the integration. Make sure to follow the Quick Start guide first.
Preload LIQA resources before LIQA session
The goal is to minimize the time the user sees the LIQA loading screen by preloading LIQA resources in advance.
You can achieve this by leveraging the `preload` API. The following code example illustrates how to initiate preloading of the "face" preset:
```tsx
import { preload } from "SOURCE_URL_PROVIDED_BY_HAUT_AI/liqa.js"

// Initiate the preloading of the "face" preset's resources in advance
// to ensure LIQA is ready when the user encounters it
preload({ preset: "face" })
```
Recommendations for usage
- If your page contains some activity required before the LIQA session (e.g. filling in a questionnaire or survey), it's best to execute the preloading before this time-consuming activity (see the sketch below).
- Considering the tiny size of the `liqa.js` script (only 1.5 KB gzipped), it's safe to put the import of the `preload` function in the page's head without concern for its impact on the page's loading speed.
- If the resources have not been fully loaded before the user starts interacting with LIQA, LIQA will efficiently reuse the previously downloaded chunks, avoiding unnecessary resource reloading.
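To illustrate the first recommendation, here is a minimal sketch of that timing. Only `preload` is a real LIQA API; the `onSurveyCompleted` callback and the way LIQA is mounted afterwards are hypothetical placeholders for your own page logic.

```tsx
import { preload } from "SOURCE_URL_PROVIDED_BY_HAUT_AI/liqa.js"

// Start downloading the "face" preset's resources as early as possible,
// before the user begins the questionnaire.
preload({ preset: "face" })

// Hypothetical callback fired once the questionnaire is completed.
// By this point the resources are likely already cached, so the
// LIQA loading screen is skipped or significantly shortened.
function onSurveyCompleted() {
  document.body.insertAdjacentHTML(
    "beforeend",
    `<hautai-liqa license="xxx-xxx-xxx" preset="face"></hautai-liqa>`
  )
}
```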
Mirror image from front camera
It is a common pattern for camera applications to display the video from the front device camera horizontally flipped (mirrored) to ease user interaction. It is also common to return the final image as the camera sees it (not mirrored).
LIQA adopts this UX pattern and returns a non-mirrored image from sessions that used the front camera. This means that the image retrieved from the `ImageCapture` API looks flipped compared to what the end user saw during their interaction with LIQA.
To override this default behaviour, LIQA also provides methods to invert or re-apply mirroring to the final image. You can achieve this by leveraging two `ImageCapture` APIs: `transform` and `source`. The following code example illustrates how the transformation is applied to the captured image with conditional logic based on the source of the image:
```ts
if (capture.source === "front_camera")
  capture = capture.transform({ horizontalFlip: true })

const imageBlob = await capture.blob()
```
Recommendations for usage
- Please note that if the image from the front camera is mirrored (the code above is applied), the right side of the face on the image represents the left side of the face in reality. This should be taken into account when SaaS Face Metrics 2.0 results are displayed per face area (see the sketch after this list).
- The available sources for image capturing can be configured. Please follow the Image Sources Customization Guideline.
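To illustrate the first note, here is a minimal sketch of remapping per-area labels when the mirrored image is displayed. The area names used here (`cheek_left`, `cheek_right`) are hypothetical examples and not part of the LIQA or Face Metrics API.

```ts
// Hypothetical helper: swaps "left"/"right" in face-area labels when the
// displayed image was mirrored, so the labels match what the user sees.
function remapAreaLabel(area: string, mirrored: boolean): string {
  if (!mirrored) return area
  if (area.includes("left")) return area.replace("left", "right")
  if (area.includes("right")) return area.replace("right", "left")
  return area
}

// Example: metrics computed for the anatomical left cheek should be
// displayed on the right-hand side of the mirrored image.
console.log(remapAreaLabel("cheek_left", true)) // "cheek_right"
```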
Convert captured image to Base64-encoded string
The goal is to change the encoding format of the output image to better fit the upload to the Haut.AI SaaS API.
By default, LIQA emits the captured image as a `Blob`, encoded as JPEG with 100% quality, using the `blob` API. The following code example provides a utility function `blobToBase64` that converts data from a `Blob` to a Base64-encoded string:
```ts
function blobToBase64(blob) {
  return new Promise((resolve) => {
    const reader = new FileReader()
    reader.onloadend = () => resolve(reader.result)
    reader.readAsDataURL(blob)
  })
}
```
Recommendations for usage
Add the utility function above to the `handleImageCapture` event listener for the `"capture"` event, mentioned in the Quick Start section, as follows:
```ts
async function handleImageCapture(event) {
  const capture = event.detail
  const blob = await capture.blob()       // returns the captured image as a JPEG Blob
  const base64 = await blobToBase64(blob) // converts the captured image to a Base64 string
  // ... app logic handling the captured image ...
}
```
Access anonymized image processing pipeline
The anonymization feature allows you to access multiple image variants from the processing pipeline, enabling privacy-preserving workflows and advanced image analysis. This is particularly useful for applications that need to inspect different stages of image processing or maintain audit trails.
Enabling Anonymization
To enable anonymization, you must configure LIQA with the `postprocessing` parameter set to `"anonymized"`:
```html
<hautai-liqa
  license="xxx-xxx-xxx"
  preset="face"
  postprocessing="anonymized"
>
</hautai-liqa>
```
Or when using the JavaScript API:
```ts
const liqa = new Liqa({
  license: "xxx-xxx-xxx",
  preset: "face",
  postprocessing: "anonymized",
  target: "body",
})
```
Available Options:
- `"original"` (default) – Returns the processed image without anonymization
- `"anonymized"` – Enables anonymization and provides access to the full processing pipeline

For complete configuration details, see the `postprocessing` parameter in the API Reference.
Face-180 Preset Support: This feature is also available for the face-180 preset, providing anonymized image variants for each captured angle (front, left, and right sides of the face). When using face-180, you'll receive anonymized data for each of the three captured images.
Single Capture Example (Face preset):
```ts
async function handleImageCapture(event) {
  const captures = event.detail

  // For the face preset, you'll typically get a single capture
  for (const capture of captures) {
    // Get anonymized data including all processing stages
    const anonymized = await capture.anonymized()
    console.log(`Processing capture from ${capture.source}`)

    // Access different image variants
    anonymized.blobs.forEach((blobData) => {
      console.log(`${blobData.type}:`, blobData.data)
      // Process each image variant as needed
    })

    // Access processing metadata
    console.log("Transform sequence:", anonymized.transformSequence)
    console.log("Face mesh data:", anonymized.mesh)
  }
}
```
Multiple Capture Example (Face-180 preset):
```ts
async function handleImageCapture(event) {
  const captures = event.detail

  // Process each capture (face-180 provides 3 captures: front, left, right)
  for (const capture of captures) {
    // Get anonymized data including all processing stages
    const anonymized = await capture.anonymized()
    console.log(`Processing ${capture.source} capture`)

    // Access different image variants
    anonymized.blobs.forEach((blobData) => {
      console.log(`${blobData.type}:`, blobData.data)
      // Process each image variant as needed
    })

    // Access processing metadata
    console.log("Transform sequence:", anonymized.transformSequence)
    console.log("Face mesh data:", anonymized.mesh)
  }
}
```
Available Image Variants
The `anonymized()` method returns an object containing multiple image processing stages (a variant-lookup sketch follows the list):
- `originalImage` – The unprocessed captured image as received from the camera or upload
- `imageRestored` – Image after initial restoration and enhancement processing
- `segmentationMask` – Binary mask showing detected face and skin regions for privacy processing
- `colorCorrectedImage` – Image after the color correction algorithm is applied for consistency
- `anonymizedImage` – Final anonymized image with embedded metadata for traceability
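As an illustration, here is a minimal sketch of picking a single variant out of `anonymized.blobs` by its `type`. It assumes the `type` field carries the variant names listed above, which you should verify against the API Reference.

```ts
// Assumed entry shape of anonymized.blobs (verify against the API Reference).
type AnonymizedBlob = { type: string; data: Blob }

// Pick a single variant out of the returned blobs by its type.
function findVariant(blobs: AnonymizedBlob[], type: string): Blob | undefined {
  return blobs.find((blobData) => blobData.type === type)?.data
}

// Usage inside the "capture" handler shown above:
//   const anonymized = await capture.anonymized()
//   const finalImage = findVariant(anonymized.blobs, "anonymizedImage")
//   const mask = findVariant(anonymized.blobs, "segmentationMask")
```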
Processing Metadata
In addition to the image variants, you also get access to processing metadata (a storage sketch follows the list):
- `transformSequence` – Array of transformations applied during the anonymization process
- `mesh` – Face mesh coordinates and landmark data used for processing
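One possible way to retain this metadata for later auditing is sketched below. The `AuditRecord` shape, the `uploadAuditRecord` helper, and the `/api/audit` endpoint are hypothetical placeholders for your own storage layer, not LIQA APIs.

```ts
// Hypothetical audit record combining the processing metadata with the source.
interface AuditRecord {
  source: string
  transformSequence: unknown
  mesh: unknown
}

// Hypothetical upload helper; "/api/audit" is a placeholder for your own backend.
async function uploadAuditRecord(record: AuditRecord): Promise<void> {
  await fetch("/api/audit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  })
}

// Usage inside the "capture" handler:
//   const anonymized = await capture.anonymized()
//   await uploadAuditRecord({
//     source: capture.source,
//     transformSequence: anonymized.transformSequence,
//     mesh: anonymized.mesh,
//   })
```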
EXIF Metadata
The anonymized image includes EXIF metadata containing the mesh and transform sequence data, providing full traceability of the anonymization process. This ensures that the processing pipeline can be audited and reproduced if needed.
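If you need to inspect that embedded metadata, one possible approach is sketched below using the third-party `exifr` library. The exact tag names under which LIQA stores the mesh and transform sequence are not specified here, so log the parsed output and inspect it.

```ts
import exifr from "exifr"

// Parse whatever EXIF metadata is embedded in the anonymized image.
// Which tags hold the mesh and transform-sequence data is not assumed here;
// log the parsed output and inspect it.
async function inspectEmbeddedMetadata(anonymizedImage: Blob) {
  const tags = await exifr.parse(anonymizedImage)
  console.log(tags)
}
```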
Recommendations for usage
- Use the `segmentationMask` to understand which areas of the image were processed for privacy
- The `originalImage` and `anonymizedImage` pair allows for before/after comparisons
- Store the `transformSequence` and `mesh` data if you need to reproduce or validate the anonymization process
- The color-corrected variant can be useful for applications requiring consistent image appearance across different lighting conditions
- For the face-180 preset: process each capture separately, as they represent different angles (front, left, right) of the same person
- For the face-180 preset: consider the `capture.source` property to identify which angle each anonymized dataset corresponds to (see the sketch after this list)
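As an illustration of the last point, here is a minimal sketch of grouping face-180 results by angle. The `ANGLE_LABELS` mapping and the assumed source values are hypothetical; verify the actual values against the Image Sources Customization Guideline.

```ts
// Hypothetical mapping from capture.source values to human-readable angle labels.
// The actual source values depend on your configuration; verify them against the
// Image Sources Customization Guideline.
const ANGLE_LABELS: Record<string, string> = {
  front: "front view",
  left: "left side",
  right: "right side",
}

async function handleImageCapture(event) {
  const captures = event.detail
  const resultsByAngle: Record<string, unknown> = {}

  for (const capture of captures) {
    // Fall back to the raw source value for unmapped angles
    const label = ANGLE_LABELS[capture.source] ?? capture.source
    resultsByAngle[label] = await capture.anonymized()
  }

  // resultsByAngle now holds one anonymized dataset per captured angle
  console.log(Object.keys(resultsByAngle))
}
```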