If you are going to build a cluster configuration for Oracle SOA Suite 11g, you should provide a shared disk for JMS persistence and other common resources. The official documentation recommends using NFS. I can't explain why, but in our case access to the NFS share was very unstable and totally unpredictable: the operating system on the cluster nodes would eventually hang without releasing resources. So we had to create a test cluster without NFS shares or any NAS/SAN appliances.
We have three virtual machines: one is a Web tier with load-balancer functionality, and the other two are SOA 11g cluster nodes. The roles of and links between the servers are shown in the diagram below.
Let's assume the servers have the following names (a minimal /etc/hosts mapping is sketched after the list):
- Web Tier - web
- Cluster nodes - node1 and node2
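For clarity, a minimal /etc/hosts mapping for such a setup might look like the lines below. Only the web server address 192.168.1.10 appears later in the text; the node addresses are assumptions for illustration, so adjust them to your network.
192.168.1.10    web
192.168.1.11    node1
192.168.1.12    node2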
Prepare block device
We start by preparing a block device to publish as an iSCSI target. Unfortunately, we have no free block devices on this server and no way to add a new drive or recreate the partitions on the existing system, so we will create a block device from a regular file on an existing file system of the web server.
- Create an empty file of the necessary size (a quick check of the resulting image follows the command)
# dd if=/dev/zero of=/usr/shared-image bs=1M count=2048
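With bs=1M and count=2048 this produces a 2 GB image file. An optional check that the image exists and has the expected size (the exact output will differ on your system):
# ls -lh /usr/shared-image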
Configure iSCSI target
For our cluster we use Oracle Enterprise Linux 5 Update 5, 64-bit. All services needed for our setup were already installed; if you are going to use another Linux distribution, you should check that the required services and packages are present. For example, for RHEL 4U8 you have to install the OCFS2 packages and probably the iscsi-target service.
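One simple way to check is to query the RPM database; package names may differ slightly between distributions, so treat this only as a sketch:
# rpm -qa | grep -i ocfs2
# rpm -qa | grep -i iscsi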
- Enable the iscsi-target service
# chkconfig iscsi-target on
- Start the service for the first time
# service iscsi-target start
- Stop the service
# service iscsi-target stop
- Edit the file /etc/ietd.conf to add the new device, as follows
# Example iscsi target configuration
#
# Everything until the first target definition belongs
# to the global configuration.
Target iqn.2011-08.demo.soa11:storage.soa.share.ocfs2
# Users, who can access this target. The same rules as for discovery
# users apply here.
# Leave them alone if you don't want to use authentication.
#Incoming User joe secret
#Outgoing User jim 12charpasswd
# Logical Unit definition
# You must define one logical unit at least.
# Block devices, regular files, LVM, and RAID can be offered
# to the initiators as a block device.
Lun 0 Path=/usr/shared-image,Type=fileio
# Alias name for this target
Alias WlsShare
# various iSCSI parameters
# (not all are used right now, see also iSCSI spec for details)
#MaxConnections 1
#InitialR2T Yes
#ImmediateData No
#MaxRecvDataSegmentLength 8192
#MaxXmitDataSegmentLength 8192
#MaxBurstLength 262144
#FirstBurstLength 65536
#DefaultTime2Wait 2
#DefaultTime2Retain 20
#MaxOutstandingR2T 8
#DataPDUInOrder Yes
#DataSequenceInOrder Yes
#ErrorRecoveryLevel 0
#HeaderDigest CRC32C,None
#DataDigest CRC32C,None
# various target parameters
#Wthreads 8
- Start the service again and publish our device to the clients (a quick verification sketch follows the command)
# service iscsi-target start
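Before moving to the clients it is worth verifying that the target actually serves our LUN. Assuming the target is the iSCSI Enterprise Target (its configuration file is /etc/ietd.conf), the published volumes are listed under /proc/net/iet and the daemon listens on TCP port 3260 by default; the following check is only a sketch:
# cat /proc/net/iet/volume
# netstat -tln | grep 3260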
Connect iSCSI clients
At this point we are going to enable the iSCSI services and attach the published device. We should run this set of commands on both cluster nodes.
- Add the iscsi service to the startup sequence
# chkconfig iscsi on
- Start the service for the first time
# service iscsi start
- Discover and attach the published resource on the web server (in our case the IP address is 192.168.1.10)
# iscsiadm -m discovery -t sendtargets -p 192.168.1.10
- Restart the service to attach the new device
# service iscsi restart
- Check that the new drive is available
# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 2636 20972857+ 83 Linux
/dev/sda3 2637 4725 16779892+ 82 Linux swap / Solaris
/dev/sda4 4726 17769 104775930 5 Extended
/dev/sda5 4726 17769 104775898+ 83 Linux
Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
We can see the new disk device on the nodes, without any partitions or file systems.
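To double-check that /dev/sdb really comes from the iSCSI target and not from a local disk, you can list the active iSCSI sessions on a node; the exact output depends on the iscsiadm version, so this is only a sketch:
# iscsiadm -m session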
Let's create a new cluster file system. We perform these operations on one server only (the second one will simply see the file system).
- Connect to disk
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
- Make the new partition span from the first cylinder to the last one and save the changes.
- Create a file system on the new partition (an optional check is sketched after the command)
# mkfs -t ocfs2 /dev/sdb1
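Optionally, you can confirm that the file system was created: the mounted.ocfs2 utility from the ocfs2-tools package lists the OCFS2 volumes it detects. This is just a sanity check, not a required step:
# mounted.ocfs2 -d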
When the new file system has been created, we can move to the next step and configure Oracle Cluster File System.
Configure OCFS2 cluster
We are going to create a new OCFS2 cluster on the servers node1 and node2.
We can edit the ocfs2 configuration files in the /etc/ directory, or we can use the ocfs2console utility.
Let's decide which servers should have access to the new shared resource. The OCFS cluster is not related to any other cluster - SOA, WebLogic, RAC or anything else. In general, an OCFS cluster is just a named list of hosts that share disk resource(s). To create a new cluster, let's use the standard GUI tool, ocfs2console. From the main menu select Cluster, then Configure Nodes. Add all participating nodes to the cluster members list. Save the changes and select Cluster -> Propagate Configuration... If you have trusted relations between the nodes, the OCFS configuration will be copied to all members.
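For reference, after propagation each node should end up with an /etc/ocfs2/cluster.conf roughly like the one below; the node IP addresses and the port are assumptions for illustration and must match your interconnect:
node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.12
        number = 1
        name = node2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2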
I usually set volume labels; it allows me to stay device-name agnostic (and Linux loves to change device names for some secret reasons). In ocfs2console select the menu Tools -> Change Label... and assign the partition label (SOA).
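If you prefer the command line, the same label can be assigned when the file system is created, or changed later on an unmounted volume with tunefs.ocfs2; a sketch of both options:
# mkfs.ocfs2 -L SOA /dev/sdb1
# tunefs.ocfs2 -L SOA /dev/sdb1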
Now we are ready to mount our device on all nodes and enable automatic mounting during OS startup. You should repeat these steps on all OCFS cluster participants.
- Enable the OCFS service:
# /etc/init.d/o2cb enable
- Create the mount point (it should obviously be the same on all nodes)
# mkdir /u01/share
- Add an entry to the file system list /etc/fstab:
LABEL=SOA /u01/share ocfs2 defaults 1 2
- Mount all devices by default:
#mount -a
#mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda5 on /u01 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
none on /var/lib/xenstored type tmpfs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdb1 on /u01/share type ocfs2 (rw,_netdev,heartbeat=local)
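A quick way to confirm that the file system really is shared is to create a file on one node and look for it on the other; the file name here is just an example.
On node1:
# touch /u01/share/hello-from-node1
On node2:
# ls /u01/share
hello-from-node1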
Finally, change the ownership of the shared directory to the Oracle software owner:
# chown -R oracle:dba /u01/share
Now you have a simple shared disk system and can continue with the SOA configuration.
Conclusion
Using this approach you can create shared storage resources quickly and simply, with no additional costs. That said, I do not recommend using this configuration outside of POC systems or demo stands, e.g. when you have to use two laptops to demonstrate SOA Suite in action.
Nowadays I would prefer to create several virtual machines (e.g. under VMware Server) and a shared virtual drive with a clustered file system (the same OCFS2 would work for you for free).
This document was originally written several years ago in Russian, so any comments and corrections are most welcome.