Sunday, May 3, 2020

How to enable search synonyms in AEM with Lucene?



This tutorial explains how to enable search synonyms in AEM with Lucene.


Search Synonyms


Synonyms inform the search engine that a search for one word should also match related words, e.g. a search for "gigabyte" should also consider "gigabytes", "gib" and "gb".

An analyzer must be configured on the custom Oak index to support search synonyms.

Refer to the following tutorials to configure a custom Oak index and analyzers:

https://www.albinsblog.com/2020/04/oak-lucene-index-improve-query-in-aem-configure-lucene-index.html#.Xu7oD2hKjb1

https://www.albinsblog.com/2020/05/how-to-enable-case-insensitive-search-in-aem-lucene.html#.Xu7nAWhKjb1

I have a data node whose id property value is "gigabyte". The node is returned when searching for "gigabyte" but not when searching for "gigabytes", "gib" or "gb".

aem-search-synonyms-with-lucene1

aem-search-synonyms-with-lucene2.png

aem-search-synonyms-with-lucene

Configure Analyzer


Let us now configure the Analyzer to support the search synonyms.

Create a node "Synonym" with the the primary type of  "nt:unstructured" under analyzers\default\filters (refer the sample configuration package from the git link posted in the bottom of the tutorial)

aem-search-synonyms-with-lucene


Add the following properties to the Synonym node:

format - solr or wordnet
synonyms - synonyms.txt, the file with the synonym definitions

There are two possible formats for the dictionary; I am using the solr format for this demo:
  • wordnet: based on the popular WordNet database. This requires the synonym definitions in a specific format.
  • solr: plain text with comma-separated values.
The synonyms.txt file is a simple comma-separated list of synonyms. All matching terms should be on a single row; a search for any word in a row will match all other words in that same row. A common use for synonyms is matching variations of a word.

synonyms.txt

GB,gib,gigabyte,gigabytes
MB,mib,megabyte,megabytes
Television, Televisions, TV, TVs
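
For reference, the resulting analyzer configuration under the custom index definition might look like the following .content.xml sketch (a minimal sketch based on the steps above; node names follow the tutorial, and the synonyms.txt definitions are stored as an nt:file child node of the Synonym filter):

<?xml version="1.0" encoding="UTF-8"?>
<!-- analyzers node under the custom index definition, e.g. /oak:index/testindex/analyzers -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="nt:unstructured">
    <default jcr:primaryType="nt:unstructured">
        <!-- split the text into tokens using the Standard tokenizer -->
        <tokenizer jcr:primaryType="nt:unstructured" name="Standard"/>
        <filters jcr:primaryType="nt:unstructured">
            <!-- lower-case tokens (see the case-insensitive search tutorial) -->
            <LowerCase jcr:primaryType="nt:unstructured"/>
            <!-- synonym filter in solr format; the definitions are read from synonyms.txt,
                 which is added as an nt:file child node of this Synonym node -->
            <Synonym jcr:primaryType="nt:unstructured"
                format="solr"
                synonyms="synonyms.txt"/>
        </filters>
    </default>
</jcr:root>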

The configurations are ready now; let us re-index the data. Change the value of the reindex property under the custom index to true - this initiates the re-indexing, and the property value changes back to false once the re-indexing is initiated.

aem-search-synonyms-with-lucene

Wait a few minutes for the indexing to complete.

Let us now search for "gib" - there is no node with the value "gib" for the id property under /content/sampledata, but "gib" is configured as a synonym for "gigabyte", so the node with the id property value "gigabyte" is returned as a result.

aem-search-synonyms-with-lucene


Synonyms Configuration - https://github.com/techforum-repo/youttubedata/tree/master/lucene


Define a custom index with synonym support and configure all the possible synonyms in the synonyms.txt file; the indexed content can then be searched using any of the configured synonyms.


Saturday, May 2, 2020

How to enable case insensitive search in AEM with Lucene?



This tutorial explains how to enable case insensitive search in AEM with Lucene.

I have two nodes with the property "id" under /content/sampledata that have the same value in different cases, e.g. TEST and test.

aem-case-insensitive-search-with-lucene1

aem-case-insensitive-search-with-lucene

By default, the Lucene search is case sensitive, so the query returns only the exactly matching node - the node with the value "test".

aem-case-insensitive-search-with-lucene

Case-insensitive search can be enabled through analyzers in the Lucene index.

Refer to the following URL to configure the custom Lucene index - https://www.albinsblog.com/2020/04/oak-lucene-index-improve-query-in-aem-configure-lucene-index.html#.Xu7oD2hKjb1

Analysis, in Lucene, is the process of converting field text into its most fundamental indexed representation, terms. These terms are used to determine what documents match a query during searching.

An analyzer tokenizes text by performing any number of operations on it, which could include extracting words, discarding punctuation, removing accents from characters, lowercasing (also called normalizing), removing common words, reducing words to a root form (stemming), or changing words into the basic form (lemmatization).

This process is also called tokenization, and the chunks of text pulled from a stream of text are called tokens. Tokens, combined with their associated field name, are terms.

In Lucene, an analyzer is a Java class that implements a specific analysis. Analysis occurs any time text needs to be converted into terms, which in Lucene's core happens at two points: during indexing and when searching.

An analyzer chain starts with a Tokenizer, to produce initial tokens from the characters read from a Reader, then modifies the tokens with any number of chained TokenFilters.

Let's look at some important built-in analyzers available in the Lucene bundle:

WhitespaceAnalyzer, as the name implies, splits text into tokens on whitespace characters and makes no other effort to normalize the tokens. It doesn’t lowercase each token. 

SimpleAnalyzer first splits tokens at nonletter characters, then lowercases each token. Be careful! This analyzer quietly discards numeric characters but keeps all other characters. 

StopAnalyzer is the same as SimpleAnalyzer, except it removes common words. By default, it removes common words specific to the English language (the, a, etc.), though you can pass in your own set.

KeywordAnalyzer treats entire text as a single token. 

StandardAnalyzer is Lucene’s most sophisticated core analyzer. It has quite a bit of logic to identify certain kinds of tokens, such as company names, email addresses, and hostnames. It also lowercases each token and removes stop words and punctuation. 

The built-in analyzers can be used directly by specifying the class name, or an analyzer can be composed from a tokenizer and a series of filters - there are multiple tokenizers, e.g. Standard, Keyword, CharTokenizer, and token filters, e.g. Stop, LowerCase, Standard.

org.apache.lucene.analysis.standard.StandardAnalyzer 

org.apache.lucene.analysis.core.SimpleAnalyzer

There are multiple standard analyzers, tokenizers and token filters; custom analyzers/tokenizers/token filters can be created if required.

The analyzer is configured via the analyzers node (of type nt:unstructured) inside the oak:index definition.

The default analyzer for an index is configured in the default child of the analyzers node (of type nt:unstructured).
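
When a built-in analyzer is used directly, the default node only needs a class property; a minimal sketch as .content.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- analyzers node under the oak:index definition -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="nt:unstructured">
    <!-- use the StandardAnalyzer for both indexing and querying -->
    <default jcr:primaryType="nt:unstructured"
        class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
</jcr:root>

In this tutorial, the analyzer is instead composed from a tokenizer and filters, as described in the next steps.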

Create a child node with name "tokenizer" of type "nt:untstrutured" under default node and add the property "name" with value "Standard"

aem-case-insensitive-search-with-lucene

Create a child node with name "filters" of type "nt:untstrutured" under default node.

Create a child node with name "LowerCase" of type "nt:untstrutured" 

This will index the data by lower casing and also match data by lower casing during the search.
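
The resulting analyzer configuration might look like the following .content.xml sketch (a minimal sketch of the nodes created above):

<?xml version="1.0" encoding="UTF-8"?>
<!-- analyzers node under the custom oak:index definition -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="nt:unstructured">
    <default jcr:primaryType="nt:unstructured">
        <!-- split the text into tokens using the Standard tokenizer -->
        <tokenizer jcr:primaryType="nt:unstructured" name="Standard"/>
        <filters jcr:primaryType="nt:unstructured">
            <!-- lower-case every token at index time and at query time -->
            <LowerCase jcr:primaryType="nt:unstructured"/>
        </filters>
    </default>
</jcr:root>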

The configurations are ready now; let us re-index the data. Change the value of the reindex property under the custom index to true - this initiates the re-indexing, and the property value changes back to false once the re-indexing is initiated.

aem-case-insensitive-search-with-lucene

Let me now re-execute the query; this time the query returns both nodes, with the uppercase and lowercase values (TEST and test).

aem-case-insensitive-search-with-lucene





Saturday, April 25, 2020

AEM Core components deep dive | How to extend AEM core components | Proxy Components in AEM



This tutorial explains the details on AEM core components

In AEM, we use either custom or OOTB components to build websites; these are called WCM (Web Content Management) components.

aem-wcm-core-components

Core components 


  • Introduced in AEM 6.2, but strongly recommended for use from 6.3 onward
  • Provide robust, extensible base components
  • Built on the latest technology and follow best practices
  • Adhere to accessibility guidelines and are compliant with the WCAG 2.0 AA standard
  • Make page authoring more flexible and customizable
  • Simple to extend to offer custom functionality


Core components - Features


aem-wcm-core-components

Core Components - Architecture


aem-wcm-core-components


  • The design dialog defines what authors can or cannot do in the edit dialog
  • The edit dialog shows authors only the options they are allowed to use
  • The Sling model verifies and prepares the content for the view (template)
  • The result of the Sling model can be serialized to JSON for SPA use cases
  • HTL renders the HTML server-side for traditional server-side rendering
  • The HTML output is semantic, accessible, search-engine optimized and easy to style

The Core Components follow the MVC (Model-View-Controller) design pattern.

aem-wcm-core-components


aem-wcm-core-components

The controller (content) refers to the proxy component (view); the proxy component extends the WCM Core Components. Proxy components are site-specific components inherited from the Core Components - they are called proxy components because no implementation is required; they simply reference the core component as the sling:resourceSuperType and can be extended as required.
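
For illustration, a proxy component is usually nothing more than a component node whose sling:resourceSuperType points to a core component. A minimal .content.xml sketch, assuming a hypothetical project named myproject and the Title v2 core component:

<?xml version="1.0" encoding="UTF-8"?>
<!-- /apps/myproject/components/title/.content.xml (project, title and group names are illustrative) -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    xmlns:cq="http://www.day.com/jcr/cq/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:Component"
    jcr:title="Title"
    componentGroup="My Project - Content"
    sling:resourceSuperType="core/wcm/components/title/v2/title"/>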

The core component refers to the Sling model (model) to retrieve the data required for rendering.

The controller (content) also refers to the templates and configuration policies.

The delegation pattern can be used to extend the model class of a core component; the responsibility of the model class is delegated to an implementation selected based on the resource type.

The Core Components use model interfaces and delegate to the right model implementation at runtime, based on the resourceType declared in the model implementation.

 Enable Core Components


The Core Components are open sourced through GitHub; the component implementations and details are available at https://github.com/adobe/aem-core-wcm-components

Either of the below approaches can be followed to install the Core Components into AEM:
  • Install through the package manager
  • Include through the code package


Install through package manager


The latest version is listed on the GitHub page; previous versions can be accessed by clicking on "Historical System Requirements".

aem-wcm-core-components

aem-wcm-core-components


Click on the version matching your system requirements and download the packages.

aem-wcm-core-components

Install the downloaded package through the package manager; the Core Components are now enabled on the AEM instance.

aem-wcm-core-components
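
Alternatively, to include the Core Components through the code package, the Core Components artifacts are typically added as dependencies in the project's Maven build (and embedded into the project's content package). A hedged sketch of the dependencies section - the version shown is illustrative and should match the AEM version in use:

<!-- pom.xml dependencies (version is illustrative) -->
<dependency>
    <groupId>com.adobe.cq</groupId>
    <artifactId>core.wcm.components.core</artifactId>
    <version>2.8.0</version>
</dependency>
<dependency>
    <groupId>com.adobe.cq</groupId>
    <artifactId>core.wcm.components.all</artifactId>
    <type>zip</type>
    <version>2.8.0</version>
</dependency>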





Saturday, April 18, 2020

How to configure the robots.txt file in AEM


This video explains the details of configuring the robots.txt file in AEM.




Friday, April 10, 2020

Oak Lucene Index - Improve the query performance in AEM(Adobe Experience Manager) | Configure Oak Lucene Index in AEM



This tutorial explains the details of enabling an Oak Lucene index to improve query performance in AEM (Adobe Experience Manager).

OAK Lucene Index


For queries to perform well, Oak supports indexing of content stored in the repository. When a JCR query is executed, it usually consults an index first. If there is no index, the query traverses the entire content, which is time consuming and an overhead for AEM. A query can be executed without an index, but for large datasets it will run very slowly, or even abort.

There are three indexing modes available, which define when and how the index content gets updated:

Synchronous Indexing - Under synchronous indexing, the index content gets updated as part of the commit itself. Changes to both the main content and the index content are done atomically in a single commit. The new content is available in the index as soon as it is committed.

Asynchronous Indexing - Asynchronous indexing (also called async indexing) is performed using periodic scheduled jobs. As part of the setup, Oak schedules periodic jobs which diff the repository content and update the index content based on that. This provides better performance, but new content is not immediately available in the index.

Near Real Time (NRT) Indexing - The index is still updated asynchronously, but recent changes are additionally indexed locally in near real time, so new content becomes searchable much sooner than with plain asynchronous indexing.

Indexing uses commit editors. Some of the editors are of type IndexEditor and are responsible for updating the index content based on changes in the main content. Currently, Oak has the following built-in editors:

  • PropertyIndexEditor
  • ReferenceEditor
  • LuceneIndexEditor
  • SolrIndexEditor

There are three main types of indexes available in AEM:

  • Lucene – asynchronous (full text and property) - Recommended
  • Property – synchronous [ Prefer only when you need synchronous results ]
  • Solr – asynchronous


Configure Lucene Index in AEM



Oak supports Lucene based indexes to support both property constraint and full text constraints. Depending on the configuration a Lucene index can be used to evaluate property constraints, full text constraints, path restrictions and sorting.

If multiple indexers are available for a query, each available indexer estimates the cost of executing the query. Oak then chooses the indexer with the lowest estimated cost.

I have a large data set (12k nodes) under "/content/sampledata" with an id property; the id property value of all nodes starts with '1111'.

aem-oak-lucene-index

Let me now execute a query to fetch all the nodes under "/content/sampledata" whose id property value starts with '1111':

select * from [nt:unstructured] where [jcr:path] like '/content/sampledata/%' and id LIKE '%1111%'

aem-oak-lucene-index

The query execution failed with the following exception "The query read or traversed more than 100000 nodes. To avoid affecting other tasks, processing was stopped"

The 100000 here is the queryLimitReads value. The queryLimitReads value can be increased, but the query fails again once the new limit is reached, and raising it also impacts overall system performance.

The queryLimitReads value can be changed through the following OSGi configuration - http://localhost:4502/system/console/configMgr/org.apache.jackrabbit.oak.query.QueryEngineSettingsService
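
If the limit really must be raised, the same setting can also be deployed from the code base; a minimal sketch as a sling:OsgiConfig node (the path and value below are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<!-- e.g. /apps/myproject/config/org.apache.jackrabbit.oak.query.QueryEngineSettingsService.xml (hypothetical location) -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="sling:OsgiConfig"
    queryLimitReads="{Long}500000"/>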

aem-oak-lucene-index

The query execution behavior can be reviewed through the Query Performance tool.

aem-oak-lucene-index
  
aem-oak-lucene-index

aem-oak-lucene-index

This displays the slow queries and popular queries; the Explain Query feature shows the query execution details.

aem-oak-lucene-index

aem-oak-lucene-index


There is no index defined for this query, so it was executed as a full traversal; the query traversed more than 100000 nodes and was aborted to avoid impacting other activities.

Let us now see how to define a Lucene index to improve the query performance.

Create a node with name "testindex" under oak:index with the following properties

jcr:primaryType - oak:QueryIndexDefinition
type - lucene
includedPaths - /content/sampledata
fullTextEnabled - false
evaluatePathRestrictions - true
compatVersion - 2
async - async, nrt
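
As a .content.xml sketch, the index definition node with these properties looks roughly as follows (a minimal sketch; the indexRules part is added in the next steps, and the {Long}/{Boolean}/[...] markers follow the usual FileVault notation):

<?xml version="1.0" encoding="UTF-8"?>
<!-- /oak:index/testindex/.content.xml -->
<jcr:root xmlns:oak="http://jackrabbit.apache.org/oak/ns/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="oak:QueryIndexDefinition"
    type="lucene"
    async="[async,nrt]"
    compatVersion="{Long}2"
    evaluatePathRestrictions="{Boolean}true"
    fullTextEnabled="{Boolean}false"
    includedPaths="[/content/sampledata]"/>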


aem-oak-lucene-index

On save, this will create a node named "indexRules".

aem-oak-lucene-index

By default, a node named nt:base is created under indexRules. Rename the node to the primary type of the nodes that need to be indexed, in our case "nt:unstructured".

aem-oak-lucene-index

aem-lucene-index


There is a default node named prop0 created under properties; rename prop0 to the property that needs to be indexed, in our case "id", and set the below properties:

id:
propertyIndex - true
ordered - true
name - id
isRegexp - false
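
As a .content.xml sketch, the indexRules part of the definition then looks roughly like this (a minimal sketch of the nodes described above):

<?xml version="1.0" encoding="UTF-8"?>
<!-- indexRules node under /oak:index/testindex -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="nt:unstructured">
    <!-- the rule node is named after the node type to be indexed -->
    <nt:unstructured jcr:primaryType="nt:unstructured">
        <properties jcr:primaryType="nt:unstructured">
            <!-- index the "id" property as an ordered property index -->
            <id jcr:primaryType="nt:unstructured"
                name="id"
                propertyIndex="{Boolean}true"
                ordered="{Boolean}true"
                isRegexp="{Boolean}false"/>
        </properties>
    </nt:unstructured>
</jcr:root>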

The https://oakutils.appspot.com/generate/index utility can be used to generate index definitions.

aem-oak-lucene-index


aem-oak-lucene-index

Let us now re-index the data. Change the reindex property to true to initiate the asynchronous indexing.

aem-oak-lucene-index

The reindex property value changes back to false after the indexing is initiated; wait for some time for the indexing to complete.

Re-execute the query. The query now works without any issues and with better performance, as it is executed using the defined Lucene index.

aem-oak-lucene-index

aem-oak-lucene-index



It is always a best practice to review slow-running custom queries and configure the required indexes to improve performance. If multiple indexes are defined, Oak chooses the best (lowest-cost) index to execute the query.