Tuesday, February 28, 2012

Applying the JRF to the WebLogic cluster through WLST



Java Required Files (JRF) consists of those components not included in the WebLogic Server installation that provide common functionality for Oracle business applications and application frameworks.

It consists of a number of independently developed libraries and applications that are deployed into a common location. The following components are considered part of Java Required Files: Oracle Application Development Framework, Oracle Fusion Middleware Audit Framework, Dynamic Monitoring Service, Fabric Common, HTTP Client, Infrastructure Security, Java Object Cache, JMX Framework, JPS, logging, MDS, OJSP.Next, Oracle Web Services, Oracle Web Services Manager, Oracle TopLink, UCP, XDK.


applyJRF


Configures a Managed Server or cluster with Java Required Files (JRF). Managed Servers that are added by product templates during the template extension process do not need to be explicitly configured with JRF using this command.

Use the applyJRF command when additional Managed Servers or clusters are added to a domain after it is initially extended with a product template.

The applyJRF command is required any time you add a Managed Server to a JRF-only domain, or if you add a Managed Server that has been configured for JRF to a domain that contains other Oracle products.


WLST Script


When we create a core WebLogic cluster, JRF is not applied by default; the WLST script below helps us apply JRF to the WebLogic cluster.



applyJRFToDomain.py


The script below applies JRF to the cluster named Cluster1 (modify the cluster name based on your requirement); JRF can also be applied to individual Managed Servers in the same way.

Script

https://github.com/techforum-repo/youttubedata/blob/master/scripts/wlst/applyJRFToDomain.py
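
The actual script is available at the link above. As a reference, a minimal sketch of what such a script might look like is shown below; the admin URL, credentials, domain home and cluster name are placeholder assumptions, so adjust them for your environment (the syntax follows the documented applyJRF(target, domain_home, should_update_domain) command).

# applyJRFToDomain.py (sketch) - applies JRF to the cluster 'Cluster1' in online mode
adminURL='t3://localhost:7001'
adminUserName='weblogic'
adminPassword='weblogic1'
# placeholder domain home - point this to the actual domain directory
domainHome='C:/Oracle/Middleware/user_projects/domains/base_domain'

connect(adminUserName, adminPassword, adminURL)
# target can be a Managed Server or a cluster; passing true activates the changes
applyJRF('Cluster1', domainHome, true)
disconnect()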

Execute the script: ORACLE_COMMON_HOME\common\bin\wlst.cmd applyJRFToDomain.py

Now the JRF is applied to the cluster "Cluster1"

Friday, February 24, 2012

Oracle SOA Suite 11g - Error retrieving NXSD encoding exception in MQ adapter


The following exception is thrown whenever a composite enqueues a message to the queue through the MQ adapter.
MQSeriesAdapter PropagateOrderRequest:OrderRequest [ Enqueue_ptt::Enqueue(Request) ] Error retrieving NXSD encoding...>>
<11-Jan-2012 12:58:16 o'clock GMT> <Notice> <Stdout> <BEA-000000> <<11-Jan-2012 12:58:16 o'clock GMT> <Error> <oracle.soa.adapter> <BEA-000000> <MQSeriesAdapter CCRMOM_PropagateOrderRequest:OrderRequest [ Enqueue_ptt::Enqueue(Request) ]
java.lang.NullPointerException
This is a runtime exception; even after the exception is thrown, the messages are successfully enqueued.

The issue was observed in Oracle SOA Suite version 11.1.1.5.0.

Solution to suppress this exception:

We applied patch 13718139 as suggested by Oracle, and it resolved the issue. The patch can be downloaded from My Oracle Support (Metalink) by searching for patch number 13718139.

Saturday, February 18, 2012

Oracle SOA Suite 11g - Callback not reaching the calling service in a Clustered environment



In Oracle SOA Suite 10g, we had the following scenario:



Process A calls Process B and waits for the callback; after receiving the callback, Process A calls the same Process B again and waits for the second callback.
This flow works fine in 10g, but after migrating to 11g the second callback does not reach Process A.
In the SOA log file, we can see the following error message:
an unhandled exception has been thrown in the Collaxa Cube system; exception reported is: "javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.3.v20110304-r9073): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (SOA_SOAINFRA.AT_PK) violated

Error Code: 1
Call: INSERT INTO AUDIT_TRAIL (CIKEY, COUNT_ID, NUM_OF_EVENTS, BLOCK_USIZE, CI_PARTITION_DATE, BLOCK_CSIZE, BLOCK, LOG) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
bind => [851450, 5, 0, 118110, 2012-01-12 16:08:04.063, 5048, 3, [B@69157b01]
Query: InsertObjectQuery(com.collaxa.cube.persistence.dto.AuditTrail@494ca627)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:744)
After detailed analysis and an Oracle SR, the root cause was identified as an audit trail caching issue that prevents the process from completing, due to the exception thrown while processing the second callback.
Let's say that in a cluster with four nodes, Process A is first created on node 1, carries on, makes the first call to Process B and dehydrates. Assume that the second call hits a node other than node 1, say node 4. Node 4 carries on doing the work, makes the second call to Process B and dehydrates. If the second callback hits node 1, then rather than getting the latest audit trail from the database, node 1 uses its cache and tries to update the audit trail with an ID that node 4 has already used. That throws the unique constraint exception and rolls back the transaction, so the callback is not delivered.
As a temporary solution, switching off the audit trail for Process A means no data is inserted into the audit trail, so the transaction is not rolled back and the callback is delivered successfully.
As a permanent solution, Oracle has provided a patch; it can be downloaded from the path below.
The issue was identified in SOA Suite version 11.1.1.5.0.

Sunday, February 5, 2012

Currently logged in user is not authorized to modify SOA metadata


We must have the SOADesigner application role to access Oracle SOA Suite Composer metadata. By default, all the users with Oracle Enterprise Manager Fusion Middleware Control administrator privileges have this role. If you log in to Oracle SOA Composer without this role, you see the following message:
"Currently logged in user is not authorized to modify SOA metadata".

For information about adding the SOADesigner application role to users without administrator privileges, see Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle BPM Suite.
If we add the SOADesigner role to a user, that user can edit the DVMs and the rules. I could not find a read-only access option for the SOA Composer application.
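
As a reference, the SOADesigner application role can be granted to a WebLogic user through the WLST grantAppRole command (run from ORACLE_COMMON_HOME\common\bin\wlst.cmd). The sketch below is a minimal example; the user name 'soauser1' and the admin connection details are placeholder assumptions.

# grantSOADesigner.py (sketch) - grants the SOADesigner application role to a WebLogic user
connect('weblogic', 'weblogic1', 't3://localhost:7001')
# the SOA application stripe is 'soa-infra'; 'soauser1' is a placeholder user name
grantAppRole(appStripe='soa-infra', appRoleName='SOADesigner',
             principalClass='weblogic.security.principal.WLSUserImpl',
             principalName='soauser1')
disconnect()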

Friday, February 3, 2012

WLST script to create a JDBC Multi Data Source in WebLogic Server

Creating a JDBC Multi Data Source through WLST in WebLogic Server

This tutorial explains how to create a JDBC Multi Data Source through a WLST script in WebLogic Server.

Refer to the URL below for details on JDBC Multi Data Sources.

https://www.albinsblog.com/2012/02/jdbc-multi-data-sources-in-weblogic.html

WLST Script


The WLST script below creates a JDBC Multi Data Source with the required configuration.

CreateMultiDataSource.py

# connection details for the Admin Server - change as required
adminURL='t3://localhost:7001'
adminUserName='weblogic'
adminPassword='weblogic1'
connect(adminUserName, adminPassword, adminURL)
edit()
startEdit()
# create the multi data source system resource named MS1
jdbcSystemResource = create("MS1","JDBCSystemResource")
jdbcResource = jdbcSystemResource.getJDBCResource()
jdbcResource.setName("MS1")
# data source parameters: JNDI name, algorithm and member data source list
dsParams = jdbcResource.getJDBCDataSourceParams()
jndiName='jdbc/MS1'
dsParams.setJNDINames([jndiName])
dsParams.setAlgorithmType('Failover')
dsParams.setDataSourceList('DS1,DS2')
dsParams.setFailoverRequestIfBusy(true)
# target the multi data source to the required server or cluster
jdbcSystemResource.addTarget(getMBean('Servers/AdminServer'))
print('Multi data source MS1 created successfully...')
save()
activate()
disconnect()



Script

https://github.com/techforum-repo/youttubedata/blob/master/scripts/wlst/CreateMultiDataSource.py

Before executing the script, change the configurations as required.

The individual data sources (DS1 and DS2) should be created before executing the script.
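
If the member data sources do not exist yet, a WLST sketch like the one below could create one of them (DS1); the driver class, database URL, credentials and target shown here are placeholder assumptions, so replace them with the actual database details. Repeat the same steps for DS2 with its own name and JNDI name.

# createMemberDataSource.py (sketch) - creates the member data source DS1
connect('weblogic', 'weblogic1', 't3://localhost:7001')
edit()
startEdit()
ds = create('DS1', 'JDBCSystemResource')
jdbcResource = ds.getJDBCResource()
jdbcResource.setName('DS1')
jdbcResource.getJDBCDataSourceParams().setJNDINames(['jdbc/DS1'])
# placeholder driver, URL and credentials - replace with the actual database details
driverParams = jdbcResource.getJDBCDriverParams()
driverParams.setDriverName('oracle.jdbc.OracleDriver')
driverParams.setUrl('jdbc:oracle:thin:@dbhost:1521/ORCL')
driverParams.setPassword('db_password')
driverParams.getProperties().createProperty('user').setValue('db_user')
jdbcResource.getJDBCConnectionPoolParams().setTestTableName('SQL SELECT 1 FROM DUAL')
# target the data source to the same server or cluster as the multi data source
ds.addTarget(getMBean('Servers/AdminServer'))
save()
activate()
disconnect()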




Execute the script — <<Oracle_Home>>\oracle_common\common\bin\wlst.cmd CreateMultiDataSource.py

Now the JDBC Multi Data Source is created with the required configuration.







JDBC Multi Data Sources in WebLogic Server



A multi data source is an abstraction around a group of data sources that provides load balancing or failover processing between the data sources associated with the multi data source. Multi data sources are bound to the JNDI tree or local application context just like data sources are bound to the JNDI tree. The Multi Data Source can be used in the same way as we use a Data Source.


When an application requests a connection, the Multi Data Source determines which data source will provide a connection based on the selected algorithm.
Create two or more data sources, and then create a Multi Data Source and assign data sources to the Multi Data Source.



Configurations for the Multi Data Source.

Algorithm Type


Load-Balancing


Connection requests to a load-balancing multi data source are served from any data source in the list. The multi data source selects data sources to use to satisfy connection requests using a round-robin scheme. When the multi data source provides a connection, it selects a connection from the data source listed just after the last data source that was used to provide a connection. Multi data sources that use the Load Balancing algorithm also fail over to the next data source in the list if a database connection test fails and the connection cannot be replaced, or if the data source is suspended.

Failover


The Failover algorithm provides an ordered list of data sources to use to satisfy connection requests. Normally, every connection request to this kind of multi data source is served by the first data source in the list. If a database connection test fails and the connection cannot be replaced, or if the data source is suspended, a connection is sought sequentially from the next data source on the list.

FailoverRequestIfBusy


With the Failover algorithm, this attribute enables failover when all connections in a data source are in use.

TestFrequencySeconds


This attribute controls the frequency at which WebLogic Server checks the health of data sources previously marked as unhealthy, to see whether connections can be recreated and the data source can be re-enabled.
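
These settings can also be changed later on an existing multi data source through WLST. The sketch below switches a multi data source to the Load-Balancing algorithm (using the MS1 multi data source from the earlier script as an example); the admin connection details are placeholder assumptions.

# updateMultiDataSource.py (sketch) - switches MS1 to the Load-Balancing algorithm
connect('weblogic', 'weblogic1', 't3://localhost:7001')
edit()
startEdit()
# navigate to the JDBCDataSourceParams bean of the multi data source MS1
dsParams = getMBean('/JDBCSystemResources/MS1/JDBCResource/MS1/JDBCDataSourceParams/MS1')
dsParams.setAlgorithmType('Load-Balancing')
save()
activate()
disconnect()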

Wednesday, February 1, 2012

Oracle SOA Suite 11g - Important SQL queries to get the details about composites from SOAINFRA schema



Query to find the execution time of BPEL instances:

SELECT * FROM
  (SELECT COMPOSITE_NAME COMPOSITENAME,
          A.CMPST_ID COMPOSITE_INSTANCE_ID,
          creation_date BEGIN_TIME,
          modify_date END_TIME,
          (extract(day from modify_date - creation_date)*86400
           + extract(hour from modify_date - creation_date)*3600
           + extract(minute from modify_date - creation_date)*60
           + extract(second from modify_date - creation_date)) duration_in_second,
          A.*
   FROM cube_instance A
   WHERE state = 5
     AND creation_date BETWEEN TO_DATE('19/01/2012 00:58:00','dd/mm/yyyy HH24:MI:SS')
                           AND TO_DATE('19/01/2012 23:59:59','dd/mm/yyyy HH24:MI:SS')
     AND COMPOSITE_NAME IN (<<Composite Name>>))
ORDER BY COMPOSITE_NAME, duration_in_second DESC

Query to find the execution time of the Mediator instances:

SELECT * FROM
  (SELECT COMPONENT_NAME COMPONENTNAME,
          A.COMPOSITE_INSTANCE_ID INSTANCE_ID,
          created_time BEGIN_TIME,
          updated_time END_TIME,
          (extract(day from updated_time - created_time)*86400
           + extract(hour from updated_time - created_time)*3600
           + extract(minute from updated_time - created_time)*60
           + extract(second from updated_time - created_time)) duration_in_second,
          A.*
   FROM mediator_instance A
   WHERE created_time BETWEEN TO_DATE('30/01/2012 00:58:00','dd/mm/yyyy HH24:MI:SS')
                          AND TO_DATE('31/01/2012 23:59:59','dd/mm/yyyy HH24:MI:SS')
     AND A.COMPONENT_NAME LIKE '%<<Mediator Composite Name>>%')
ORDER BY COMPONENT_NAME, duration_in_second DESC

Query to find the count of composite instances created during a particular time period:
BPEL instances:

select * from
  (select count(COMPONENT_NAME) count, COMPONENT_NAME
   from CUBE_INSTANCE
   where CREATION_DATE BETWEEN TO_DATE('30/01/2012 12:58:00','dd/mm/yyyy HH24:MI:SS')
                           AND TO_DATE('31/01/2012 23:00:00','dd/mm/yyyy HH24:MI:SS')
   /*and component_state<5*/
   group by COMPONENT_NAME)
order by count desc

Mediator Instances:

select * from
  (select count(COMPONENT_NAME) count, COMPONENT_NAME
   from MEDIATOR_INSTANCE
   where CREATED_TIME BETWEEN TO_DATE('30/01/2012 12:58:00','dd/mm/yyyy HH24:MI:SS')
                          AND TO_DATE('31/01/2012 23:00:00','dd/mm/yyyy HH24:MI:SS')
   group by COMPONENT_NAME)
order by count desc

Getting the count of instances in different states for the BPEL components:

select composite_name, count(*),
       DECODE(cube_instance.STATE,
              0,  'STATE_INITIATED',
              1,  'STATE_OPEN_RUNNING',
              2,  'STATE_OPEN_SUSPENDED',
              3,  'STATE_OPEN_FAULTED',
              4,  'STATE_CLOSED_PENDING_CANCEL',
              5,  'STATE_CLOSED_COMPLETED',
              6,  'STATE_CLOSED_FAULTED',
              7,  'STATE_CLOSED_CANCELLED',
              8,  'STATE_CLOSED_ABORTED',
              9,  'STATE_CLOSED_STALE',
              10, 'STATE_CLOSED_ROLLED_BACK',
              'unknown') state
from CUBE_INSTANCE
group by state, composite_name
order by composite_name;

Getting the count of instances in different states for the Mediator components:

select COMPONENT_NAME, count(*),
       DECODE(mediator_instance.COMPONENT_STATE,
              0,  'STATE_CLOSED_COMPLETED',
              2,  'STATE_OPEN_FAULTED',
              4,  'STATE_RECOVERY_REQUIRED',
              8,  'STATE_OPEN_RUNNING',
              16, 'STATE_CLOSED_STALE',
              48, 'STATE_TERMINATED',
              50, 'STATE_TERMINATED',
              'unknown') STATE
from mediator_instance
group by component_state, COMPONENT_NAME
order by COMPONENT_NAME;

Querying the call back message status from dlv_message table:

Callbacks that could not be delivered to the receive activity:

select count(*) from dlv_message where dlv_type = 2 and state=1

Callback arrived but did not get resolved:

select count(*) from dlv_message where dlv_type = 2 and state=0

Callback successfully delivered to receive:

select count(*) from dlv_message where dlv_type = 2 and state=2

Querying recoverable invoke messages:

select * from dlv_message where dlv_type = 1 and state = 0;

To check for rejected messages for a specific composite:

select count(*) from rejected_message where composite_dn like '%<<Composite Name>>%';