Thursday, June 11, 2020

CRXDE and Package manager not accessible in AEM publisher - Error while retrieving infos


I was facing an issue while accessing CRXDE and Package Manager on an AEM publisher; the pages displayed as blank and kept loading.


The request to "http://localhost:4503/crx/de/init.jsp?_dc=1591923460206" returned a 500 response.


The below exception was seen in the error log (stack trace truncated):

11.06.2020 19:49:20.558 *ERROR* [qtp1936302775-72] Error while retrieving infos
java.lang.NullPointerException: null
at [com.adobe.granite.crxde-lite:1.1.42]
at org.apache.felix.http.base.internal.handler.ServletHandler.handle( [org.apache.felix.http.jetty:4.0.8]
at org.apache.felix.http.base.internal.dispatch.InvocationChain.doFilter( [org.apache.felix.http.jetty:4.0.8]
at com.adobe.granite.license.impl.LicenseCheckFilter.doFilter( [com.adobe.granite.license:1.2.10]
at org.apache.felix.http.sslfilter.internal.SslFilter.doFilter( [org.apache.felix.http.sslfilter:1.2.6]
at org.apache.felix.http.base.internal.dispatch.Dispatcher.dispatch( [org.apache.felix.http.jetty:4.0.8]
at org.apache.felix.http.base.internal.dispatch.DispatcherServlet.service( [org.apache.felix.http.jetty:4.0.8]
at javax.servlet.http.HttpServlet.service( [org.apache.felix.http.servlet-api:1.1.2]
...
at org.eclipse.jetty.server.Server.handle( [org.apache.felix.http.jetty:4.0.8]
...
at java.base/

CRXDE/Package Manager was accessible by logging in through crx/explorer (http://localhost:4503/crx/explorer/index.jsp) or through the system console (http://localhost:4503/system/console), but not directly.
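A quick way to confirm the symptom is to compare the HTTP status of the three entry points mentioned above (the publisher port and admin credentials below are assumptions; adjust for your instance):

```shell
# Endpoints discussed above; port 4503 and admin credentials are assumptions.
publisher="http://localhost:4503"
paths="/crx/de/init.jsp /crx/explorer/index.jsp /system/console"

probe() {
  for p in $paths; do
    printf '%s -> ' "$p"
    curl -s -o /dev/null -w '%{http_code}\n' -u admin:admin "$publisher$p"
  done
}
# Run `probe` against a live publisher; a 500 on /crx/de/init.jsp while the
# other two paths respond normally matches the behaviour described here.
```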

After analysis, I identified that the issue can happen in one of the below scenarios:
  • Read access for the anonymous user to its own user node has been removed
  • The anonymous user was deleted by mistake
The actual root cause in our case was the first scenario: read access for the anonymous user to its own user node had been removed accidentally, so the anonymous user did not have permission to access its own user node. By default, AEM grants the anonymous user read permission on its own user node.



The issue got resolved after re-enabling read access for the anonymous user to its own user node.



Another scenario is the anonymous user being deleted (deliberately or accidentally) and the server restarted: the anonymous user is recreated upon restart, but unfortunately AEM does not re-enable read access for the anonymous user to its own user node on recreation; that permission is granted only during the initial server setup.

The problem should then be addressed by enabling the read permission manually.
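For instances managed as code, the manual fix can be scripted as a Sling repoinit statement (a sketch; repoinit support for `home()` paths varies by AEM version, so verify before use):

```
# Repoinit sketch: re-grant the anonymous user read access on its own user node
set ACL for anonymous
    allow jcr:read on home(anonymous)
end
```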

Sunday, June 7, 2020

How to enable Google CDN for custom origin websites | Google CDN for external websites



Cloud CDN by Google is a low-latency content delivery solution for small to enterprise businesses. Cloud CDN (Content Delivery Network) uses Google's globally distributed edge points of presence to cache external HTTP(S) load-balanced content close to your users. When a user visits your website, the content is retrieved from the nearest CDN location rather than from your web server. Caching content at the edges of Google's network delivers content to your users faster while reducing serving costs.

Some of the features of Google Cloud CDN:
  • HTTP/2 support
  • Global distribution with anycast IP
  • Integrated with Google Cloud – Cloud Monitoring and Cloud Logging
  • Purge cache instantly
This tutorial explains the steps required to enable Google Cloud CDN in front of custom origin websites running outside of the Google Cloud Platform.


Prerequisites
  • Google Cloud Account
  • Externally accessible websites


I have a website,, running on an external server; I want to enable the CDN in front of my website.



  • Register for a Google Cloud Account
  • Create new HTTP(S) Load Balancer
  • Enable CDN
  • Enable A-Record for DNS

Register for a Google Cloud Account

If you have not yet enabled Google Cloud CDN, access and click on "Try Cloud CDN Free".



Configure the payment profile. It will take some time to complete the configuration; you should see "Compute Engine is getting ready". Once the Compute Engine is ready, you will see the option to add a CDN origin.


Create new HTTP(S) Load Balancer

As a first step, let us configure the new HTTP(S) Load Balancer. Click Add origin on the previous screen and click Continue on the next screen. Select "Use a custom origin" and click "Create a load balancer".


Enter a unique name for the load balancer. Click on Backend configuration and create a backend service.


Enter a name and select the backend type as "Internet network endpoint group", and change the protocol as required – this protocol will be used by the Google load balancer to connect to the origin server. I am going with the HTTP protocol for the demo.

Select "Create Internet network endpoint group" from the "Internet network endpoint group" dropdown.


Enter a name for the Internet network endpoint group and select the "Network endpoint group type" as Network endpoint group (Internet).

Add the default port used to connect to the origin server – in my case the server is accessible through port 80 – and enter the IP address or fully qualified domain name of the origin server. I am configuring the IP address.

Click on Create.

Select the created Internet network endpoint group in the backend service configuration screen and click Done.

Select "Enable Cloud CDN" and click on create


Select Host and Path Rules and enable the required rules – I am going with the default rule configuration.

Now click on Frontend Configurations


Enter a name

Select the protocol users use to connect to the website; if HTTPS is selected, an SSL certificate covering the supported domains should be uploaded.


You can upload an existing certificate or create a Google-managed SSL certificate.

I am going with HTTP for the demo


Create a new IP address – the default ephemeral IP address lives only for a short time, so create a static IP address.



Review the configurations and click on Create


Now the HTTP(S) load balancer configuration is ready with CDN enabled.

Enable A-Record for DNS

Now your website is accessible through the Google CDN front-end IP; add an A-record pointing to the front-end IP for your website's DNS through your DNS provider – in my case Cloudflare.

e.g A-Record
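In zone-file terms the record might look like this (the address below is a placeholder for your reserved front-end static IP):

```    300    IN    A
```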

Now should be accessible through Google CDN; you can monitor the traffic through the console, and cache paths can be invalidated whenever required.


My backend server runs Apache. I have enabled a virtual host configuration to support Google CDN sends the DNS name as part of the Host header, so enable a separate virtual host to support each additional website.

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "C:\opt\communique\dispatcher\cache"
    ServerAlias localhost

    RewriteEngine On
    # Substitution target was not shown in the original; '-' passes the request through unchanged
    RewriteRule ^/test.html$ - [L]

    <Directory />
        Options Indexes FollowSymLinks Includes
        # Process server-side includes in .html files
        AddOutputFilter INCLUDES .html
        AddOutputFilterByType INCLUDES text/html
        AllowOverride None
    </Directory>
</VirtualHost>

This concludes enabling CDN on a custom origin website. The same setup can be used to support multiple websites; the backend (origin) should support the required DNS names – Google CDN sends the Host header with the DNS name the user is accessing from the browser, and the origin server can use that Host header to enable DNS-specific functionality. Google CDN improves website performance by caching and serving content from a network of servers across the world; the origin server is contacted only when the requested content is not available in the CDN cache.

Sunday, May 31, 2020

Social Login with LinkedIn - Adobe Experience Manager (AEM)


Social login is the ability to offer site visitors the option to sign in with their social accounts like Facebook, Twitter, LinkedIn, etc. AEM supports Facebook and Twitter social logins OOTB, but LinkedIn login is not supported OOTB, and a custom provider needs to be built to support the login flow for websites.

AEM internally uses the ScribeJava module to support the social login flows; ScribeJava supports multiple providers and both the OAuth 1.0 and OAuth 2.0 protocols. However, the Scribe version shipped with AEM does not support the LinkedIn OAuth 2.0 authentication flow, only OAuth 1.0.
This tutorial explains the steps and the customization required to support LinkedIn social login in AEM as a Cloud Service; the same should work with minimal changes for other AEM versions.


  • LinkedIn Developer Account
  • AEM as Cloud Publisher
  • WKND Sample Website
  • Git Terminal
  • Maven

LinkedIn Login Flow


AEM Login URL

LinkedIn Authorization Page URL

Access Token URL (GET) - grant_type=authorization_code&client_id=&client_secret=&code=<authorization code received>&redirect_uri=

Retrieve Profile Data -,localizedFirstName,localizedLastName)?oauth2_access_token=<Access Token>


  • LinkedIn App Setup
  • Setup Custom LinkedIn OAuth Provider
  • Configure Service User
  • Configure OAuth Application and Provider
  • Enable OAuth Authentication
  • Test the Login Flow
  • Encapsulated Token Support
  • Sling Distribution user’s synchronization

LinkedIn App Setup

As a first step, we should setup the OAuth app from LinkedIn. Login to and click on Create App


Enter your app name, and link to an existing LinkedIn company page or create a new one. Upload a logo for your app, accept the legal agreement, and click Create App. You should verify the company page (generate a verification URL and click the URL to approve the app's association with the company).


Add the redirect URL in Auth tab - http://localhost:4503/callback/j_security_check


On product tabs select “Sign in with LinkedIn”


This will take some time for approval, once approved you can see the required permissions under 


The LinkedIn app is ready now; copy the Client ID and Client Secret (reveal the secret before copying) - these values are required to enable the OAuth authentication handler in AEM.

Configure Service User

Enable a service user with the required permissions to manage users in the system. You can use one of the existing service users with the required access; I decided to define a new service user (oauth-linkedin-service - the name referred to in; change the name if required).
Create a system user with the name oauth-linkedin-service: navigate to http://localhost:4503/crx/explorer/index.jsp, log in as an admin user, and click on User Administration.



Now enable the required permissions for the user: navigate to http://localhost:4503/useradmin (somehow I am still more comfortable with the useradmin UI for permission management).


Now enable the service user mapping for the provider bundle - add an entry to the Apache Sling Service User Mapper Service Amendment: linkedin.oauth.provider:oauth-linkedin-service=oauth-linkedin-service
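If you manage configuration in code rather than through the console, the same amendment can be deployed as an OSGi factory config, e.g. in a file named org.apache.sling.serviceusermapping.impl.ServiceUserMapperImpl.amended~linkedin.cfg.json (the file-name suffix is an example):

```json
{
  "user.mapping": [
    "linkedin.oauth.provider:oauth-linkedin-service=oauth-linkedin-service"
  ]
}
```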


Setup Custom LinkedIn OAuth Provider

As mentioned earlier, AEM does not support LinkedIn authentication OOTB; define a new provider to support authentication with LinkedIn. Refer to for a sample LinkedIn provider.

The reference provider supports only OAuth 1.0; to support OAuth 2.0, the provider should be modified, and some extra overridden classes are required, as the Scribe package shipped with AEM does not support OAuth 2.0 for a LinkedIn provider.

The LinkedIn Provider to support the OAuth 2.0 can be downloaded from - – Provider class to support the LinkedIn authentication – API class extended from default scribe DefaultApi20 to support LinkedIn OAuth 2.0 API integration – Service class to get the Access Token from LinkedIn service response – Extract the access token from LinkedIn authorization response

The provider bundle is built against the aem-sdk-api jar for AEM as a Cloud Service; other AEM versions can use the same bundle by changing aem-sdk-api to the uber-jar.

Clone the repository - git clone

Deploy linkedin-oauth-provider bundle – change the directory to bundles\linkedin-oauth-provider and execute mvn clean install -PautoInstallBundle -Daem.port=4503

Here I am going to enable the authentication for publisher websites; change the port number and deploy to the Author if required. After a successful deployment, you should be able to see the LinkedIn provider in the config manager.


The can be changed, but the same value should be used while configuring the "Adobe Granite OAuth Application and Provider".

Configure OAuth Application and Provider

Let us now enable the “Adobe Granite OAuth Application and Provider” for LinkedIn

Config ID – Enter a unique value, this value should be used while invoking the AEM login URL
Client ID – Copy the Client ID value from LinkedIn App
Client Secret - Copy the Client Secret value from the LinkedIn app (copy the secret by revealing the value)
Scope - r_liteprofile r_emailaddress
Provider ID – linkedin
Create users – Select the check box to create AEM users for LinkedIn profiles
Callback URL – the same value configured in LinkedIn App(http://localhost:4503/callback/j_security_check)
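The same settings can be sketched as an OSGi factory config for the "Adobe Granite OAuth Application and Provider" ( The property names below mirror the console labels but are assumptions; verify them against your AEM version, and treat the ID/secret values as placeholders:

```json
{
  "oauth.config.id": "linkedin",
  "oauth.client.id": "<client id from the LinkedIn app>",
  "oauth.client.secret": "<client secret from the LinkedIn app>",
  "oauth.scope": ["r_liteprofile", "r_emailaddress"],
  "oauth.provider.id": "linkedin",
  "oauth.create.users": true,
  "oauth.callback.url": "http://localhost:4503/callback/j_security_check"
}
```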


Enable OAuth Authentication

The "Adobe Granite OAuth Authentication Handler" is not enabled by default; the handler can be enabled by opening and saving it without making any changes.

Test the Login Flow

Now the configurations are ready; let us initiate the login - access http://localhost:4503/j_security_check?configid=linkedin from a browser (in a real scenario you would enable a link or button pointing to this URL). This will take the user to the LinkedIn login screen.


The user should grant access the first time.


Now you should be able to login to the WKND website


The user profile is created in AEM



Whenever the profile data (e.g. firstName and lastName) changes in LinkedIn, the change is reflected in AEM on a subsequent login, based on the "Apache Jackrabbit Oak Default Sync Handler" configuration.

AEM creates an "Apache Jackrabbit Oak Default Sync Handler" configuration specific to each OAuth provider implementation.

The sync handler syncs the user profile data between the external authentication system and AEM repository.

The user profile data is synced based on the User Expiration Time setting; the user data is re-synced on the next login after the previously synced data has expired (the default is 1 hour). Modify the configuration based on your requirements.
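As an illustration, the relevant part of the provider-specific sync handler config might look like the following (property names as exposed by the Oak Default Sync Handler; the path prefix is an assumption, so verify against your instance):

```json
{
  "handler.name": "linkedin",
  "user.expirationTime": "1h",
  "user.pathPrefix": "linkedin"
}
```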



Encapsulated Token Support

By default, the authentication token is persisted in the repository under the user's profile, which means the authentication mechanism is stateful. The Encapsulated Token is the way to configure stateless authentication: it ensures that the cookie can be validated without accessing the repository, but the user must still be available on all the publishers in the farm configuration.

Refer for more details on Encapsulated Token Support

Enable the Encapsulated Token Support in "Adobe Granite Token Authentication Handler"


Sling Distribution user’s synchronization

The users created on one publisher should be synced to all the other publishers in the farm to support seamless authentication. I have not found a good reference document explaining user sync in AEM as a Cloud Service (AEM Communities features are not enabled in AEM as a Cloud Service; for other AEM versions, user sync was enabled through the Communities components), so I am planning to cover user sync in another tutorial.


This tutorial mainly focused on authenticating website users through their LinkedIn profile, but the same solution can be used with small changes to support other providers. User authentication is enabled through the OAuth 2.0 protocol, as LinkedIn has already deprecated OAuth 1.0. Custom provider and helper classes are required to support the authentication flow, as AEM does not support LinkedIn authentication OOTB, though it does support Twitter and Facebook. Feel free to give your feedback and suggest changes to the provider bundle.

Friday, May 22, 2020



The Sling Content Distribution (SCD) module allows one to distribute Sling resources between different Sling instances. It can be used to distribute/sync content between AEM Author and Publishers in different scenarios.

In this tutorial let us see how to enable sling content distribution to distribute/sync content between AEM author and publishers.


  • The module allows distribution of content among different Sling instances.
  • "Distribution" - the ability to pick one or more resources on a certain Sling instance and copy and persist them onto another Sling instance.
  • The Sling Content Distribution module is able to distribute content by:
"pushing" from Sling instance A to Sling instance B - forward distribution
"pulling" from Sling instance B to Sling instance A - reverse distribution
"synchronizing" Sling instances A and B via a (third) coordinating instance - sync distribution
  • Agent - creates one or more packages of resources from the source(s), dispatches the packages to one or more queues, and processes the queued packages by persisting them into the target instance(s).
  • Exporting - the process of creating one or more packages; may happen either locally to the agent (the "push" scenario) or remotely (the "pull" scenario).
  • Importing - the process of persisting one or more packages; may happen either locally (the "pull" scenario) or remotely (the "push" scenario).



Forward distribution - Create/update content

Execute the below curl command in Author to Distribute content modification(new or modified) under /content/sample1 to publisher

curl -v -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/publish -d "action=ADD" -d "path=/content/sample1"

Forward distribution - Delete content

Execute the below curl command in Author to Distribute content deletions under /content/sample1 to publisher

curl -v -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/publish -d "action=DELETE" -d "path=/content/sample1"

Reverse distribution - Create/update content

Execute the below curl command in Publisher to add the distribution content to the Reverse Distribution Queue

curl -u admin:admin http://localhost:4503/libs/sling/distribution/services/agents/reverse -d "action=ADD" -d "path=/content/sample1"

Execute the below curl command in the Author to PULL the distribution content from the publisher

curl -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/publish-reverse -d "action=PULL"

Sync Distribution

Execute the below curl command in Publisher to add the distribution content to the Sync Distribution Queue

curl -u admin:admin http://localhost:4503/libs/sling/distribution/services/agents/reverse-pubsync -d "action=ADD" -d "path=/content/sample1"

Execute the below curl command in the Author to PULL the content from the publisher and distribute it to the publishers in the farm other than the publisher that initiated the content changes (the changes won't be persisted to the Author).

curl -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/pubsync -d "action=PULL"
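The flows above vary only in instance, agent name, and action; a small helper (a sketch using the hosts and agent names from the commands above) keeps the calls consistent:

```shell
# Builds the Sling distribution agent endpoint for a given instance and agent.
dist_url() {
  printf '%s/libs/sling/distribution/services/agents/%s' "$1" "$2"
}

# Usage against live instances (uncomment to run):
# curl -u admin:admin "$(dist_url http://localhost:4502 publish)" \
#      -d action=ADD -d path=/content/sample1
# curl -u admin:admin "$(dist_url http://localhost:4502 publish-reverse)" \
#      -d action=PULL
```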


Trigger factories trigger the distribution on a specific agent based on events. Triggers help us automate the distribution based on events or a specific interval.
  • DistributionEventDistributeDistributionTrigger - DistributionTrigger for chain distribution upon a certain DistributionEventTopics
  • JcrEventDistributionTrigger - A JCR observation based DistributionTrigger, trigger the distribution based on JCR events on specific nodes
  • PersistedJcrEventDistributionTrigger - DistributionTrigger that listens for certain events and persists them under a specific path in the repo
  • RemoteEventDistributionTrigger - DistributionTrigger to trigger distribution upon reception of server sent events on a certain URL
  • ResourceEventDistributionTrigger - DistributionTrigger for triggering a specific handler (e.g. agent) upon node / properties being changed under a certain path
  • ScheduledDistributionTrigger - DistributionTrigger to schedule distributions on a certain DistributionAgent, trigger the distribution based on specific interval.
Let us now see the details on Forward Distribution


  • A forward distribution setup allows one to transfer content from a source instance (Author) to a farm of target instances (Publish)
  • That is done by pushing content from the source to the targets


  • configure a local importer on publish
  • configure a "forward" agent on author pointing to the URL of the importer on publish




  • Configure Forward Agent in Author 
  • Configure Local Importer in Publisher
  • Enable Triggers – Scheduled/JCREvent
  • Test – CURL/UI/Triggers

Configure Forward Agent in Author

Configure a forward agent in the Author that will distribute content from the Author to the publishers' importer endpoints based on the configuration.

Access http://localhost:4502/aem/start.html, Tools - Deployments - Distribution


Modify the default Forward Agent with name publish or create new agent of type Forward Distribution


Service Name - the service name is optional; if required, create a service user with the required permissions
Log Level - change the log level if required
Allowed Roots - configure the content roots the agent is allowed to distribute
Importer Endpoints - list of publisher endpoints to which packages are sent, e.g.
http://localhost:4503/libs/sling/distribution/services/importers/default ("default" is the local importer name on the publisher)
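The same agent can be described as an OSGi factory config for the forward agent factory. The property names below follow the factory's configuration keys but should be verified against your Sling distribution version; a minimal sketch:

```json
{
  "name": "publish",
  "enabled": true,
  "log.level": "info",
  "allowed.roots": ["/content"],
  "packageImporter.endpoints": [
    "http://localhost:4503/libs/sling/distribution/services/importers/default"
  ]
}
```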

Ensure the Forward Agent Component is active


If the component is in an unsatisfied state, verify the individual services that are unsatisfied and fix the configuration errors.


Configure Local Importer in Publisher

Configure a local importer ("Apache Sling Distribution Importer - Local Package Importer Factory") to receive the content from the Author. The publisher is enabled with a local importer named "default" by default; the same can be used.


If required, define a new local importer by accessing http://localhost:4503/system/console/configMgr/ (the importer endpoint in the Author's forward agent should be modified based on the local importer name).

Now the Forward Agent is ready to distribute the content to publishers


The agent can be disabled or paused from the configuration page also the Queue status and logs can be monitored

Let us test the forward distribution through curl command.

Modify the content under /content/we-retail in Author


Execute the below curl command in Author

curl -v -u admin:admin http://localhost:4502/libs/sling/distribution/services/agents/publish -d "action=ADD" -d "path=/content/we-retail/jcr:content"


Now the content is distributed to publish instance


The content distribution can also be triggered through Forward Agent Configuration Page


Let us now see how to automate the forward distribution through triggers


Configure a JCR Event Trigger in Author to add the JCR changes under the configured path to the Forward Agent Queue.

Enter name - "forward-sync"
Path for which the changes are distributed - "/content/we-retail"
Service Name - enter a service name with the required access; I am using the default one for the demo (socialpubsync-distributionService). The trigger will not be activated without configuring the service user
Use deep distribution - enable this if you want to distribute the subtree of the configured node on any event


Now link the trigger to the "Apache Sling Distribution Agent - Forward Agents Factory" configured with the name "publish" in the earlier step, Triggers - (name=forward-sync)


Now content modifications in the Author under the /content/we-retail node will be synced to the publisher.

The "Apache Sling Distribution Trigger - Scheduled Triggers Factory" can be configured to distribute the content on regular interval(link the trigger to the forward agent, either one of the - JCR or Scheduled trigger can be linked to agent)


Sling content distribution helps us distribute content between the Author and Publish instances. Distribution triggers can be configured to automate the distribution of content between the Author and Publishers. The forward distribution agent helps us distribute content from the Author to the Publishers. Let us continue with reverse distribution in the next tutorial.