Tuesday, May 31, 2016

Integration of Jenkins server with GitHub

There are multiple blog posts on this topic, but I could not find the complete information in one place.

Thought of sharing this for my reference too. This post will explain how to integrate a Jenkins server running on Linux with a private GitHub repository.

Configuring the Git plugins in Jenkins:

Jenkins → Manage Jenkins → Manage Plugins
Make sure the Jenkins Git Client Plugin and the Jenkins Git Plugin are installed.


Configure Git on the server:

Log in to the Jenkins server via PuTTY
Execute the following command - sudo yum install git-all
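To verify the installation (the exact version output will vary by distribution):

git --version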

Configure the Git executable path:

Jenkins → Manage Jenkins → Configure System
Enter the Git executable path (Git was installed on the server in the previous step).
In my case the path is /usr/bin/git; change it accordingly.
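If you are not sure of the path, the following command prints the location of the installed Git executable:

which git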


SSH Configuration:

Log in to the Jenkins server via PuTTY
Execute the following command - sudo -u jenkins ssh-keygen -t rsa
Enter the details accordingly; the passphrase is optional, but if one is required, enter some string.
This will create the private and public keys under /var/lib/jenkins/.ssh (the default files are id_rsa and id_rsa.pub)
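To confirm the keys were generated (assuming the default Jenkins home of /var/lib/jenkins):

sudo -u jenkins ls -l /var/lib/jenkins/.ssh

id_rsa is the private key and stays on the Jenkins server; id_rsa.pub is the public key that will be added to GitHub in a later step.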

Configuring the credential:

Jenkins → Credentials
Click on Add domain and enter a name (preferably the GitHub host name)
Click on Add credentials
Enter the user name as jenkins (the user that runs Jenkins)
Select “From the Jenkins master ~/.ssh” as the private key source


Configure the deployment key in GitHub:

Log in to GitHub and click on Profile Settings
Click on SSH Keys and paste the content of id_rsa.pub (cat id_rsa.pub) into the Key field
Click on Add key.
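To verify that GitHub accepts the key, run the standard connectivity test as the jenkins user (assuming github.com is the Git host):

sudo -u jenkins ssh -T git@github.com

A successful response looks like: Hi <user>! You've successfully authenticated, but GitHub does not provide shell access.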



Configure the Job:

Configure the Git repository URL and select the credential created in the previous step
Specify the branch to be built.
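Since the credential is SSH-based, the repository URL should use the SSH format; the organization and repository names below are placeholders:

git@github.com:your-org/your-repo.git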





Saturday, April 30, 2016

Only a type can be imported. xxxxxx resolves to a package - AEM/Adobe CQ5

We were receiving the following error at run time, and because of it the pages were broken. The same scenario was working fine earlier and broke without any code or configuration change on the server.

We tried the following options without any luck:
  • Re-deploying the code to the affected server
  • Restarting the server
  • Removing the run-time class files - /var/classes/org/apache
  • Recompiling/clearing the JSP classes - http://www.albinsblog.com/2016/04/how-to-clearrecompile-jsp-classes-in-AEM-6.1.html
We could not identify the root cause, but as a workaround we manually added a space to the component and saved it through CRXDE (presumably forcing the JSP to recompile), and that fixed the issue.



Friday, April 29, 2016

How to disable the online compaction - AEM/Adobe CQ5

AEM 6.1 has a daily maintenance job scheduled that performs the online compaction (revision garbage collection).

Sometimes this may cause performance issues, especially when the online compaction takes a long time to complete due to server load or authoring activity.

As a solution, the online compaction job can be disabled, and offline compaction can be run whenever required during a maintenance window.

Steps to disable online compaction:
  • Go to the CRXDE path - /libs/granite/operations/config/maintenance/granite:daily/granite:RevisionGC
  • Change the run mode from crx3 to crx3-disabled
  • Save
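Offline compaction can then be run whenever needed while the instance is stopped, using the oak-run tool. A minimal sketch, assuming an oak-run version matching the Oak version of the AEM instance and the default repository location:

java -jar oak-run-<version>.jar compact /opt/aem/crx-quickstart/repository/segmentstore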



java.util.concurrent.ExecutionException: hudson.util.IOException2: Failed to create a temporary file in xxxxx/fingerprints/2c/73

We were receiving the following exception while executing deployment jobs from Jenkins:

java.util.concurrent.ExecutionException: hudson.util.IOException2: Failed to create a temporary file in xxxxx/fingerprints/2c/73



Based on the analysis, the issue happened due to insufficient disk space on the server.


The deployment succeeded after freeing some space on the server.
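A quick way to check the available space (assuming the default Jenkins home under /var/lib/jenkins):

df -h /var/lib/jenkins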



Saturday, April 23, 2016

How to modify the scheduler time for daily and weekly maintenance jobs - AEM 6.1?

This post will explain the steps to modify the scheduled time of the daily and weekly maintenance jobs in AEM 6.1.

To change the scheduled time of the jobs:

Log in to CRXDE and navigate to /libs/granite/operations/config/maintenance/granite:weekly or /libs/granite/operations/config/maintenance/granite:daily

Change the windowStartTime and windowEndTime properties accordingly to change the scheduled time of the job.
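For example, the granite:daily node carries window properties similar to the following (illustrative values in HH:mm format):

windowStartTime = 2:00
windowEndTime = 5:00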




Click on the job category; this will display the status of the current job and the next scheduled time.
Here we can see any message related to a job failure, and we can also start the job immediately.




To stop the scheduled jobs, log in to the OSGi console (/system/console/bundles) and stop the “Granite Maintenance Oak” bundle.






Friday, April 15, 2016

How to manage the i18n translations in Adobe CQ5 (AEM)

This post will explain how to manage the i18n translations in Adobe CQ5 (AEM).

First, create the i18n language nodes if they do not already exist; a sketch of the expected structure is shown below.
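A minimal sketch of a Sling i18n dictionary (the path /apps/myproject/i18n and the key greeting.hello are hypothetical; the language node carries the mix:language mixin with a jcr:language property, and each entry is a sling:MessageEntry):

/apps/myproject/i18n
    + en [mix:language]   jcr:language = "en"
        + greeting.hello [sling:MessageEntry]
              sling:key = "greeting.hello"
              sling:message = "Hello"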

The translator console (/libs/cq/i18n/translator.html) can be used to manage the i18n translations for the different languages.


Here you can add, delete, or modify keys and the language values.

Whenever we perform any operation, the change can be viewed from CRXDE under the language node created as part of the first step.

The i18n keys can be replicated to the publisher in different ways, as mentioned below.
·         Manually creating the i18n nodes in the publisher
·         Creating a package from the author and deploying it to the publisher
·         Replicating the nodes through tree activation


Select the node that needs to be replicated in the Tree Activation tool (/etc/replication/treeactivation.html) and activate it
The selected nodes will be replicated to the publisher



Thursday, April 7, 2016

How to restrict crawling/indexing of specific URLs in Adobe Search and Promote (Adobe S&P)

In some cases, we may need to index only specific types of URLs from the website and exclude all the other URLs.

URL masks can be used in Adobe S&P to achieve this.

A URL mask helps us define rules to include or exclude specific URLs during indexing.

We will be able to define include and exclude rules:

Include - a pattern that specifies the URLs to be indexed
Exclude - a pattern that specifies the URLs to be excluded from indexing

To index the URLs that start with a given prefix, define an include mask with that prefix.


The crawler will index all the URLs that start with https://server.com/content/doc
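The mask definition might look like the following (a sketch using the S&P include keyword; server.com is the placeholder host from this post):

include https://server.com/content/doc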

To index the URLs that are in a particular format, wildcards can be used.

This crawler will index all the URLs matching https://server.com/content/doc/*.html?id=*

e.g. https://server.com/content/doc/sample.html?id=123
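The corresponding mask entry, where * acts as a wildcard (again a sketch):

include https://server.com/content/doc/*.html?id=*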

A regular expression can also be used to match the URLs for indexing.


This crawler will index all the URLs matching the regex ^.*/content/doc/.*\.html$
e.g. https://server.com/content/doc/sample.html
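Regular-expression masks use the regexp keyword (a sketch, assuming the standard S&P mask syntax):

include regexp ^.*/content/doc/.*\.html$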


