Advanced Usage

Here is an advanced guide to some optional methods that can make embedding LIQA more seamless and provide more control over the integration. Make sure to follow the Quick Start guide first.

Preload LIQA resources before LIQA session

The goal is to minimize the time the user sees the LIQA loading screen by preloading LIQA resources in advance.

You can achieve this by leveraging the preload API. The following code example illustrates the initiation of the Face preset preloading:

```tsx
import { preload } from "SOURCE_URL_PROVIDED_BY_HAUT_AI/liqa.js"

// Initiate the preloading of the "face" preset's resources in advance
// to ensure LIQA is ready when the user encounters it
preload({ preset: "face" })
```

Recommendations for usage

  • If your page contains some activity before the LIQA session (e.g. filling in a questionnaire or survey), start the preloading before this time-consuming activity so the resources download while the user is busy.
  • Given the tiny size of the liqa.js script (only 1.5 Kb gzipped), it is safe to place the import of the preload function in the page's head without concern for its impact on the page's loading speed.
  • If the resources have not been fully loaded before the user starts interacting with LIQA, LIQA will efficiently reuse the previously downloaded chunks, avoiding unnecessary resource reloading.

Mirror image from front camera

It is a common pattern for camera applications to display the video from the device's front camera horizontally flipped (mirrored) to ease user interaction. It is also common to return the final image as the camera sees it (not mirrored).

LIQA adopts this UX pattern and returns a non-mirrored image from LIQA sessions that used the front camera. This means that the image retrieved from the ImageCapture API looks flipped compared to what the end user saw during their interaction with LIQA.

To override this default behavior, LIQA also provides methods to invert or re-apply mirroring to the final image. You can achieve this by leveraging two ImageCapture APIs: transform and source. The following code example illustrates how the transformation is applied to the captured image, with conditional logic based on the image's source:

```ts
if (capture.source === "front_camera") {
  capture = capture.transform({ horizontalFlip: true })
}

const imageBlob = await capture.blob()
```

Recommendations for usage

  • Note that if the image from the front camera is mirrored (i.e. the code above is applied), the right side of the face in the image represents the left side of the face in reality. Take this into account when SaaS Face Metrics 2.0 are displayed per face area.
  • The available sources for image capturing can be configured; see the Image Sources Customization Guideline.
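As a sketch of the left/right bookkeeping mentioned above, the helper below (hypothetical, not part of LIQA; area names such as "left_cheek" are illustrative, not the actual SaaS Face Metrics 2.0 identifiers) swaps side-prefixed face-area labels when the displayed image has been mirrored:

```typescript
// Hypothetical helper: swap left/right area labels when the image was mirrored.
// With a mirrored front-camera image, the "left" side of the face on screen
// is the person's right side in reality, and vice versa.
function adjustAreaForMirroring(area: string, mirrored: boolean): string {
  if (!mirrored) return area
  if (area.startsWith("left_")) return area.replace("left_", "right_")
  if (area.startsWith("right_")) return area.replace("right_", "left_")
  return area // areas without a side prefix (e.g. "forehead") are unchanged
}
```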

Convert captured image to Base64-encoded string

The goal is to change the encoding format of the output image to better fit uploads to the HautAI SaaS API.

By default, LIQA emits the captured image as a Blob encoded as JPEG at 100% quality, using the blob API. The following code example provides a utility function blobToBase64 that converts a Blob into a Base64-encoded string:

```ts
function blobToBase64(blob: Blob): Promise<string> {
  return new Promise((resolve) => {
    const reader = new FileReader()
    reader.onloadend = () => resolve(reader.result as string)
    reader.readAsDataURL(blob)
  })
}
```

Recommendations for usage

Add the utility function above to the handleImageCapture event listener for the "capture" event (introduced in the Quick Start section) as follows:

```ts
async function handleImageCapture(event) {
  const capture = event.detail

  const blob = await capture.blob() // returns the captured image as a JPEG Blob
  const base64 = await blobToBase64(blob) // converts the captured image to a Base64 string

  // ... app logic handling the captured image ...
}
```
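Note that FileReader.readAsDataURL produces a full data URL ("data:image/jpeg;base64,…"), not just the Base64 payload. If your upload endpoint expects only the raw Base64 string (check your API's requirements), a small helper like this hypothetical one can strip the prefix:

```typescript
// Hypothetical helper: extract the raw Base64 payload from a data URL.
// Everything before the first comma is the "data:<mime>;base64" prefix.
function stripDataUrlPrefix(dataUrl: string): string {
  const comma = dataUrl.indexOf(",")
  return comma === -1 ? dataUrl : dataUrl.slice(comma + 1)
}
```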

Access anonymized image processing pipeline

The anonymization feature allows you to access multiple image variants from the processing pipeline, enabling privacy-preserving workflows and advanced image analysis. This is particularly useful for applications that need to inspect different stages of image processing or maintain audit trails.

Enabling Anonymization

To enable anonymization, you must configure LIQA with the postprocessing parameter set to "anonymized":

```html
<hautai-liqa
  license="xxx-xxx-xxx"
  preset="face"
  postprocessing="anonymized"
>
</hautai-liqa>
```

Or when using the JavaScript API:

```ts
const liqa = new Liqa({
  license: "xxx-xxx-xxx",
  preset: "face",
  postprocessing: "anonymized",
  target: "body",
})
```

Available Options:

  • "original" (default) – Returns the processed image without anonymization
  • "anonymized" – Enables anonymization and provides access to the full processing pipeline

For complete configuration details, see the postprocessing parameter in the API Reference.

Face-180 Preset Support: This feature is available for the face-180 preset, providing anonymized image variants for each captured angle - front, left, and right sides of the face. When using face-180, you'll receive anonymized data for each of the three captured images.

Single Capture Example (Face preset):

```ts
async function handleImageCapture(event) {
  const captures = event.detail

  // For the face preset, you'll typically get a single capture
  for (const capture of captures) {
    // Get anonymized data including all processing stages
    const anonymized = await capture.anonymized()
    console.log(`Processing capture from ${capture.source}`)

    // Access different image variants
    anonymized.blobs.forEach((blobData) => {
      console.log(`${blobData.type}:`, blobData.data)
      // Process each image variant as needed
    })

    // Access processing metadata
    console.log("Transform sequence:", anonymized.transformSequence)
    console.log("Face mesh data:", anonymized.mesh)
  }
}
```

Multiple Capture Example (Face-180 preset):

```ts
async function handleImageCapture(event) {
  const captures = event.detail

  // Process each capture (face-180 provides 3 captures: front, left, right)
  for (const capture of captures) {
    // Get anonymized data including all processing stages
    const anonymized = await capture.anonymized()
    console.log(`Processing ${capture.source} capture`)

    // Access different image variants
    anonymized.blobs.forEach((blobData) => {
      console.log(`${blobData.type}:`, blobData.data)
      // Process each image variant as needed
    })

    // Access processing metadata
    console.log("Transform sequence:", anonymized.transformSequence)
    console.log("Face mesh data:", anonymized.mesh)
  }
}
```

Available Image Variants

The anonymized() method returns an object containing multiple image processing stages:

  • originalImage – The unprocessed captured image as received from the camera or upload
  • imageRestored – Image after initial restoration and enhancement processing
  • segmentationMask – Binary mask showing detected face and skin regions for privacy processing
  • colorCorrectedImage – Image after color correction algorithm application for consistency
  • anonymizedImage – Final anonymized image with embedded metadata for traceability
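If you need one specific variant rather than iterating over all of them, a small lookup over the blobs array can help. The helper below is a sketch that assumes the blobs entries have the { type, data } shape shown in the examples above; the exact field layout should be confirmed against the API Reference:

```typescript
// Sketch: pick a single image variant from the anonymized result by its type
// name (e.g. "anonymizedImage", "segmentationMask"). Generic over the data
// payload so it works regardless of whether `data` is a Blob or another type.
function findVariant<T>(
  blobs: { type: string; data: T }[],
  type: string
): T | undefined {
  return blobs.find((b) => b.type === type)?.data
}
```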

Processing Metadata

In addition to image variants, you also get access to processing metadata:

  • transformSequence – Array of transformations applied during the anonymization process
  • mesh – Face mesh coordinates and landmark data used for processing
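One way to keep this metadata for later auditing is to serialize it alongside the stored image. The sketch below assumes transformSequence and mesh are JSON-serializable plain data, which is an assumption on our part rather than a documented guarantee:

```typescript
// Sketch: serialize the anonymization metadata so it can be stored next to
// the image and used later to audit or reproduce the processing pipeline.
function serializeAnonymizationMetadata(anonymized: {
  transformSequence: unknown[]
  mesh: unknown
}): string {
  return JSON.stringify({
    transformSequence: anonymized.transformSequence,
    mesh: anonymized.mesh,
  })
}
```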

EXIF Metadata

The anonymized image includes EXIF metadata containing the mesh and transform sequence data, providing full traceability of the anonymization process. This ensures that the processing pipeline can be audited and reproduced if needed.

Recommendations for usage

  • Use the segmentationMask to understand which areas of the image were processed for privacy
  • The originalImage and anonymizedImage pair allows for before/after comparisons
  • Store the transformSequence and mesh data if you need to reproduce or validate the anonymization process
  • The color-corrected variant can be useful for applications requiring consistent image appearance across different lighting conditions
  • For face-180 preset: Process each capture separately as they represent different angles (front, left, right) of the same person
  • For face-180 preset: Consider the capture.source property to identify which angle each anonymized dataset corresponds to
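Following the last recommendation, the anonymized face-180 results can be indexed by their capture source so each dataset stays associated with its angle. The sketch below assumes capture.source is a plain string; the source value used in the test ("front_camera") and the result shape are illustrative assumptions, not documented values:

```typescript
// Sketch: index anonymized face-180 results by capture angle via `source`.
type AnonymizedResult = { blobs: { type: string; data: unknown }[] }

function indexBySource(
  results: { source: string; anonymized: AnonymizedResult }[]
): Map<string, AnonymizedResult> {
  const bySource = new Map<string, AnonymizedResult>()
  for (const r of results) bySource.set(r.source, r.anonymized)
  return bySource
}
```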