Sunday, June 21, 2020

Social Login with Google OAuth2— Adobe Experience Manager (AEM)


Social login lets a site visitor sign in with a social account such as Facebook, Twitter, or LinkedIn. AEM supports Facebook and Twitter social login out of the box, but Google login is not supported OOTB, so a custom provider must be built to support the login flow for websites.

AEM internally uses the scribejava library to support social login flows; scribejava supports multiple providers and both the OAuth 1.0 and OAuth 2.0 protocols.

This tutorial explains the steps and the customization required to support Google login in AEM as a Cloud Service; the same approach should work with minimal changes for other AEM versions.


Prerequisites

  • Google Account
  • AEM as a Cloud Service Publisher
  • WKND Sample Website
  • Git
  • Maven

Google Login Flow




Auth Page URL (browser redirect):<client id>&redirect_uri=<callback URL>&response_type=code&scope=profile

Access Token URL (POST):<authorization code received>&client_id=<client id>&client_secret=<client secret>&redirect_uri=<callback URL>&grant_type=authorization_code
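The flow above boils down to two HTTP calls. As a rough sketch (the endpoint constants below are Google's published OAuth 2.0 URLs; the class and all credential values are illustrative placeholders, not part of the AEM provider), the requests can be assembled like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the two requests behind Google's OAuth 2.0 authorization-code flow.
// The endpoint constants are Google's published OAuth 2.0 URLs; every credential
// value passed in is a placeholder.
public class GoogleOAuthUrls {

    static final String AUTH_ENDPOINT = "";
    static final String TOKEN_ENDPOINT = "";

    // Step 1: the URL the browser is redirected to for the consent screen.
    static String authorizationUrl(String clientId, String redirectUri) {
        return AUTH_ENDPOINT
                + "?response_type=code"
                + "&client_id=" + clientId
                + "&redirect_uri=" + redirectUri
                + "&scope=profile";
    }

    // Step 2: the form parameters POSTed to exchange the returned code for a token.
    static Map<String, String> tokenRequestParams(String code, String clientId,
            String clientSecret, String redirectUri) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("grant_type", "authorization_code");
        params.put("code", code);
        params.put("client_id", clientId);
        params.put("client_secret", clientSecret);
        params.put("redirect_uri", redirectUri);
        return params;
    }
}
```

In AEM these calls are made by the authentication handler and the provider bundle; the sketch only shows what travels over the wire.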


  • Create Project in Google Developers Console
  • Setup Custom Google OAuth Provider
  • Configure Service User
  • Configure OAuth Application and Provider
  • Enable OAuth Authentication
  • Test the Login Flow
  • Encapsulated Token Support
  • Sling Distribution user’s synchronization

Create Project in Google Developers Console

As a first step, create a new project in Google to set up the OAuth client — access the Google Developers Console (

Click on “Create Project”


Enter a project name and click on Create


The project is created now


Let us now configure the OAuth client from the project settings


Search for "APIs & Services" and click on the "APIs & Services" result


Click on Credentials


Click on “Create Credentials” then OAuth client ID


The Consent Screen should be configured before the OAuth client ID configuration can be initiated


Select User Type as "Internal" or "External" based on your requirement — "Internal" is only available to G Suite users


Enter the application name, application logo, and support email


The scopes “email”, “profile” and “openid” are added by default, “profile” scope is enough for basic authentication.


Save the configurations now

Now again click on “Create Credentials” → OAuth client ID


Select the application type “Web Application” and enter application name


"Authorized JavaScript origins" — the URL that initiates the login; I am going with localhost for the demo

"Authorized redirect URIs" — the URL to be invoked on successful login: http://localhost:4503/callback/j_security_check (use a valid domain for real authentication)

Click on Create button


The OAuth client is created now; copy the Client ID and Client Secret — these values are required to enable the OAuth authentication handler in AEM.


To use the client in production, the OAuth Consent Screen should be submitted for approval.

Click on “Configure Consent Screen” again


Enter the required values, “Authorized domains”, “Application Home Page Link”, “Application Privacy Policy Link” and submit for approval

The approval may take days or weeks; meanwhile, the project can be used for development


The Google project is now ready to test the login flow

Configure Service User

Enable a service user with the required permissions to manage users in the system. You can use one of the existing service users with the required access; I am defining a new service user (oauth-google-service — the name referenced in the provider's service user mapping; change the name if required)

Create a system user named oauth-google-service — navigate to http://localhost:4503/crx/explorer/index.jsp, log in as an admin user, and click on User Administration



Now enable the required permissions for the user — navigate to http://localhost:4503/useradmin (somehow I am still comfortable with the useradmin UI for permission management)


Now enable the service user mapping for the provider bundle — add an entry to "Apache Sling Service User Mapper Service Amendment": google.oauth.provider:oauth-google-service=oauth-google-service


Setup Custom Google OAuth Provider

As mentioned earlier, AEM does not support Google authentication OOTB; define a new provider to support authentication with Google.

The custom Google provider bundle can be downloaded from the repository referenced below. It contains:
  • a Provider class to support Google authentication
  • an API class extending scribe's DefaultApi20 to support the Google OAuth 2.0 API integration
  • a Service class to extract the access token from the Google service response

The provider bundle is built against the aem-sdk-api jar for AEM as a Cloud Service; other AEM versions can use the same bundle by changing aem-sdk-api to the uber jar.
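The API class is the core of the bundle: in the actual provider it extends scribe's DefaultApi20 and contributes little more than Google's endpoints. A simplified stand-alone sketch (plain Java, no scribe dependency; the class name and method shapes here are illustrative, not the bundle's real code):

```java
// Simplified stand-in for the bundle's Google API class. In the real bundle this
// logic lives in a class extending scribe's DefaultApi20; it is shown as a plain
// class here so the endpoint wiring is visible without the scribe dependency.
public class GoogleApiSketch {

    // Endpoint the authorization code is exchanged against for an access token.
    public String getAccessTokenEndpoint() {
        return "";
    }

    // URL the user is redirected to for Google's consent screen.
    public String getAuthorizationUrl(String clientId, String callbackUrl, String scope) {
        return ""
                + "?response_type=code"
                + "&client_id=" + clientId
                + "&redirect_uri=" + callbackUrl
                + "&scope=" + scope;
    }
}
```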

Clone the repository — git clone

Deploy the google-oauth-provider bundle — change directory to bundles\google-oauth-provider and execute mvn clean install -PautoInstallBundle -Daem.port=4503

Here I am enabling authentication for publisher websites; change the port number and deploy to the author instance if required.

After a successful deployment, you should be able to see the Google provider in the config manager.


The provider ID can be changed, but the same value should be used while configuring "Adobe Granite OAuth Application and Provider".

Configure OAuth Application and Provider

Let us now enable the “Adobe Granite OAuth Application and Provider” for Google

Config ID — Enter a unique value; this value is used while invoking the AEM login URL
Client ID — Copy the Client ID value from the Google OAuth client
Client Secret — Copy the Client Secret value from the Google OAuth client
Scope — "profile"
Provider ID — google
Create users — Select the checkbox to create AEM users for Google profiles
Callback URL — the same value configured in the Google OAuth client (http://localhost:4503/callback/j_security_check)
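The settings above can also be shipped as an OSGi factory configuration instead of being entered in the console. A minimal sketch — the property names below are my reading of the "Adobe Granite OAuth Application and Provider" entry as it appears in /system/console/configMgr, so verify them against your AEM version before use:

```
# Factory config for "Adobe Granite OAuth Application and Provider"
# (property names are illustrative; confirm in /system/console/configMgr)
oauth.config.id="google"
oauth.client.id="<Client ID from the Google console>"
oauth.client.secret="<Client Secret from the Google console>"
oauth.scope=["profile"]
oauth.provider.id="google"
oauth.create.users=B"true"
oauth.callback.url="http://localhost:4503/callback/j_security_check"
```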


Enable OAuth Authentication

"Adobe Granite OAuth Authentication Handler" is not enabled by default; the handler can be enabled by opening its configuration and saving it without making any changes.

Test the Login Flow

Now that the configurations are ready, let us initiate the login — access http://localhost:4503/j_security_check?configid=google from a browser (in a real scenario you can enable a link or button pointing to this URL). This will take the user to the Google sign-in screen.


Now you will be logged in to the WKND website after a successful login from the Google sign-in page


The user profile is created in AEM



Whenever the profile data (e.g. family_name and given_name) is changed in the Google account, the change is reflected in AEM on a subsequent login, based on the "Apache Jackrabbit Oak Default Sync Handler" configuration.

AEM creates an "Apache Jackrabbit Oak Default Sync Handler" configuration specific to each OAuth provider implementation.

The sync handler syncs the user profile data between the external authentication system and AEM repository.

The user profile data is synced based on the User Expiration Time setting — the user data gets synced on the next login after the previously synced data has expired (the default is 1 hour).
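For example, the provider-specific sync handler instance can be tuned so profile changes propagate sooner. A minimal sketch of the relevant "Apache Jackrabbit Oak Default Sync Handler" properties (names as they appear in the console; verify before use):

```
# Oak Default Sync Handler instance created for the Google provider.
# Lowering user.expirationTime makes profile changes sync on an earlier login.
handler.name="google"
user.expirationTime="1h"
user.pathPrefix="google"
```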

Modify the configurations based on the requirement.




Encapsulated Token Support

By default, the authentication token is persisted in the repository under the user's profile, which makes the authentication mechanism stateful. Encapsulated Token is the way to configure stateless authentication — it ensures that the cookie can be validated without accessing the repository, but the user must still be available on all the publishers in the farm configuration.

Refer to the Adobe documentation for more details on Encapsulated Token support

Enable the Encapsulated Token Support in “Adobe Granite Token Authentication Handler”


Sling Distribution user’s synchronization

The users created on one publisher should be synced to all the other publishers in the farm to support seamless authentication. I have not found a good reference document explaining user sync in AEM as a Cloud Service (AEM Communities features are not enabled in AEM as a Cloud Service; for other AEM versions, user sync was enabled through the Communities components). I am planning to cover user sync in another tutorial.


This tutorial mainly focused on authenticating website users through a Google account, but the same solution can be used with small changes to support different providers. Feel free to give your feedback and suggest changes on the provider bundle.

Saturday, June 13, 2020

Sling Content Distribution in AEM (Part 2) — Reverse Distribution

This tutorial is a continuation of the earlier tutorial on Sling Content Distribution in AEM; refer to the Part 1 tutorial for the initial setup.

In this tutorial, let us see the details of Sling reverse distribution in AEM.


  • A reverse distribution setup allows one to transfer content from a farm of source instances (publishers) to a target instance (author).
  • That is done by pulling the content from the source instances (publishers) into the target instance (author).

This helps sync the data generated on a farm of publishers into the author instance.



  • configure a "queue" agent and a package exporter on the publisher (source instance)
  • configure a "reverse" agent on the author (target instance) pointing to the URL of the exporter on publish; multiple publisher endpoints can be configured


  • Configure Reverse Agent in Author
  • Configure Queue agent  and exporter on Publisher
  • Enable Triggers – Scheduled/JCREvent
  • Test – CURL/Triggers

Configure Reverse Agent in Author

Configure a reverse agent in the author that will pull distribution content from the publisher endpoints based on the configuration.

Access http://localhost:4502/aem/start.html, Tools - Deployments - Distribution


Create a new distribution agent of type Reverse Distribution:

Enter a name - "reverse"
Title - "reverse"
Check "Enabled"
Service Name - optional; if required, create a service user with the required permissions
Change the log level if required
Add exporter endpoint URLs - the URLs pointing to the publisher; multiple endpoint URLs can be configured: http://localhost:4503/libs/sling/distribution/services/exporters/reverse ("reverse" is the queue distribution agent name on the publisher)
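The same agent can be shipped as an OSGi configuration rather than created through the console. A minimal sketch, assuming the reverse agents factory PID and property names as shown in the console (verify both in /system/console/configMgr):

```
# org.apache.sling.distribution.agent.impl.ReverseDistributionAgentFactory~reverse
# (PID and property names are illustrative; confirm in the web console)
name="reverse"
enabled=B"true"
packageExporter.endpoints=["http://localhost:4503/libs/sling/distribution/services/exporters/reverse"]
```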



Save the configurations — the agent is created now, but its status is paused.


Resume the agent by clicking on the resume button on the agent detail page.



The agent is now ready to pull the distribution content from the publisher.

Configure Queue agent and exporter on Publisher

Configure a queue agent that places the changes into the queues and an exporter that exports packages from the queue agent.

Access http://localhost:4503/aem/start.html, Tools - Deployments - Distribution

Create a new distribution agent of type Queue:
Enter a name - "reverse"
Title - "reverse"
Check "Enabled"
Service Name - optional; if required, create a service user with the required permissions
Change the log level if required
Allowed Roots - add the root paths the agent is responsible for distributing, e.g. /content/we-retail (multiple root paths can be configured if required)


Save the configurations; the queue distribution agent is enabled now


Let us now configure Exporter

Enter name - "reverse"
The target reference for the DistributionAgent that will be used to export packages - "(name=reverse)", here "reverse" is the queue agent name configured in the previous step


Now the initial configurations are ready; let us test the reverse distribution scenario through curl commands

Modify some content under the /content/we-retail node on the publisher


Execute the below curl commands

curl -u admin:admin http://localhost:4503/libs/sling/distribution/services/agents/reverse -d "action=ADD" -d "path=/content/we-retail/jcr:content"   (add the modified content to publisher Distribution queue)


Now the content is queued to the publisher distribution queue


curl -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/reverse -d "action=PULL" (PULL the content from publisher queue to author)


Now the content is pulled to the author


Let us now see how to automate the reverse distribution through triggers

Configure a JCR Event Trigger in Publisher

Configure a JCR Event Trigger in Publisher to add the JCR changes under the configured path to the Distribution queue

Access http://localhost:4503/system/console/configMgr/

Enter name - "reverse-sync"
Path for which the changes are distributed - "/content/we-retail"
Service Name - enter a service name with the required access; I am using the default one for the demo (socialpubsync-distributionService). The trigger will not be activated without configuring the service user.
Use deep distribution - enable this if you want to distribute the subtree of the configured node on any event


Now link the trigger to the "Apache Sling Distribution Agent - Queue Agents Factory" configured with the name "reverse" in the earlier step: Triggers - (name=reverse-sync)


Configure a Scheduled Event Trigger in Author

Configure a scheduled event trigger in the author to pull the content from the publishers' distribution queues

Enter name - "reverse-sync"
Distribution Type - "PULL"
Distributed Path - the path to be distributed periodically: "/content/we-retail"
Service Name - enter a service name with the required access; I am using the default one for the demo (socialpubsync-distributionService). The trigger will not be activated without configuring the service user.
Interval in Seconds - the number of seconds between distribution requests (default 30 seconds)


Now link the trigger to the "Apache Sling Distribution Agent - Reverse Agents Factory" configured with the name "reverse" in the earlier step: Triggers - (name=reverse-sync)


Now content modifications on the publisher under the /content/we-retail node will be synced to the author every 30 seconds

This concludes the reverse distribution configuration between publisher and author — the content changes from the publisher are pulled to the author. We can configure multiple publisher endpoints in the author's reverse distribution agent to pull content changes. Triggers can be configured in the author and the publishers to completely automate the reverse distribution of content. Let us continue with distribution sync in the next tutorial.

Thursday, June 11, 2020

CRXDE and Package manager not accessible in AEM publisher - Error while retrieving infos


I was facing an issue while accessing CRXDE and Package Manager on an AEM publisher — the pages displayed as blank and kept loading.


The response to the "http://localhost:4503/crx/de/init.jsp?_dc=1591923460206" request was 500


The below exception was seen in the error log file

11.06.2020 19:49:20.558 *ERROR* [qtp1936302775-72] Error while retrieving infos
java.lang.NullPointerException: null
	at ... [com.adobe.granite.crxde-lite:1.1.42]
	at org.apache.felix.http.base.internal.handler.ServletHandler.handle(...) [org.apache.felix.http.jetty:4.0.8]
	at org.apache.felix.http.base.internal.dispatch.InvocationChain.doFilter(...) [org.apache.felix.http.jetty:4.0.8]
	at com.adobe.granite.license.impl.LicenseCheckFilter.doFilter(...) [com.adobe.granite.license:1.2.10]
	at org.apache.felix.http.sslfilter.internal.SslFilter.doFilter(...) [org.apache.felix.http.sslfilter:1.2.6]
	... (remaining Felix dispatch, Jetty server, and thread pool frames omitted)

CRXDE/Package Manager was accessible by logging in through crx/explorer (http://localhost:4503/crx/explorer/index.jsp) or through the system console (http://localhost:4503/system/console), but not directly.

After analysis, I identified that the issue can happen in one of the below scenarios:
  • Read access for the anonymous user to its own user node has been removed
  • The anonymous user was deleted by mistake

The actual root cause of our issue was the first scenario — the anonymous user did not have permission to read its own user node. By default, AEM grants the anonymous user read permission on its own user node; the issue was caused by accidentally removing that access.



The issue was resolved after re-enabling read access for the anonymous user to its own user node



Another scenario is the anonymous user being deleted (accidentally or otherwise) and the server restarted — the anonymous user is recreated upon restart, but unfortunately AEM does not re-enable read access for the anonymous user to its own user node on recreation; that permission is granted only during the initial server setup.

In that case the problem should be addressed by enabling the read permission manually.
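On AEM versions that support Sling repoinit, the manual fix can be scripted so it survives instance rebuilds. A minimal sketch — /home/users is used here for simplicity; scope the path down to the anonymous user's actual (randomized) home path if broad read access is a concern:

```
# Sling repoinit: restore read access for the anonymous user
set ACL for anonymous
    allow jcr:read on /home/users
end
```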

Sunday, June 7, 2020

How to enable Google CDN for custom origin websites | Google CDN for external websites



Cloud CDN by Google is a low-latency content delivery solution for small to enterprise businesses. Cloud CDN (Content Delivery Network) uses Google's globally distributed edge points of presence to cache external HTTP(S) load-balanced content close to your users — when a user visits your website, content is retrieved from the nearest CDN location rather than from your web server. Caching content at the edges of Google's network provides faster delivery of content to your users while reducing serving costs.

Some of the features of Google Cloud CDN
  • HTTP/2 support
  • Global distribution with anycast IP
  • Integrated with Google Cloud – Cloud Monitoring and Cloud Logging
  • Purge cache instantly
This tutorial explains the steps required to enable the Google CDN in front of the custom origin websites running outside of Google Cloud platform.


  • Google Cloud Account
  • Externally accessible websites


I have a website running on an external server; I want to activate the CDN in front of my website.



  • Register for a Google Cloud Account
  • Create new HTTP(S) Load Balancer
  • Enable CDN
  • Enable A-Record for DNS

Register for a Google Cloud Account

If you have not already enabled Google Cloud CDN, access the Cloud CDN page and click on "Try Cloud CDN Free"



Configure the payment profile. This will take some time to complete; you should see "Compute Engine is getting ready", and once the Compute Engine is ready you will see the option to add a CDN origin.


Create new HTTP(S) Load Balancer

As a first step, let us configure the new HTTP(S) load balancer. Click "Add origin" on the previous screen and click "Continue" on the next screen. Select "Use a custom origin" and click on "Create a load balancer".


Enter a unique name for the load balancer. Click on backend configuration and create a backend service


Enter a name and select the backend type "Internet network endpoint group"; change the protocol as required — this protocol will be used by the Google load balancer to connect to the origin server. I am going with the HTTP protocol for the demo

Select “Create Internet network endpoint group” from "Internet network endpoint group" dropdown


Enter a name for Internet network endpoint group, select “Network endpoint group type” as Network endpoint group (Internet).

Add the default port used to connect to the origin server — in my case the server is accessible through port 80 — and enter the IP address or the fully qualified name of the origin server; I am configuring the IP address

Click on Create

Select the created Internet network endpoint group in backend service configuration screen and click on Done

Select "Enable Cloud CDN" and click on create


Select Host and Path Rules and enable the required rules — I am going with the default rule configuration

Now click on Frontend Configurations


Enter a name

Select the protocol users use to connect to the website; an SSL certificate with the supported domains should be uploaded if the protocol is HTTPS.


You can upload an existing certificate or create a Google-managed SSL certificate

I am going with HTTP for the demo


Create a new IP address — the default ephemeral IP address lives only for a short time, so create a static IP address.



Review the configurations and click on Create


Now the HTTP(S) load balancer configuration is ready and enabled with CDN

Enable A-Record for DNS

Now your website is accessible through the Google CDN front-end IP; add an A record pointing to the front-end IP for the website's DNS through your network provider — in my case Cloudflare

e.g. an A record for the website domain pointing to the static front-end IP

Now the website should be accessible through Google CDN; you can monitor the traffic through the console, and cache paths can be invalidated whenever required.


My backend server is running on Apache, and I have enabled a virtual host configuration to support the site — Google CDN sends the DNS value as part of the Host header, so enable a different virtual host to support multiple websites.

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "C:\opt\communique\dispatcher\cache"
    ServerAlias localhost

    RewriteEngine On

    # Example rewrite rule (target omitted):
    # RewriteRule ^/test.html$ ... [L]

    <Directory />
        # Process server-side includes in .html files
        Options Indexes FollowSymLinks Includes
        AddOutputFilter INCLUDES .html
        AddOutputFilterByType INCLUDES text/html
        AllowOverride None
    </Directory>
</VirtualHost>

This concludes enabling CDN on a custom-origin website. The same setup can be used to support multiple websites — the backend (origin) should support the required DNS names, since Google CDN sends the Host header with the DNS value the user accessed from the browser. The Host header on the origin server can be used to enable DNS-specific functionality. Google CDN improves website performance by caching and serving website content from a network of servers across the world; the origin server is contacted only when the requested content is not available in the CDN cache.