1. What's this?

These are scripts to integrate pgpool and heartbeat. Pgpool is a
replication server for PostgreSQL and improves reliability, but pgpool
itself is a single point of failure. To avoid this, run pgpool under
heartbeat.

This software contains:

-OCF-style scripts for heartbeat, necessary to run pgpool under heartbeat.
-A monitoring script for pgpool.


2. Requirements

- heartbeat
 2.0+ required. (Tested on 2.1.4 only.)

- pgpool
 Tested on pgpool-II only, but it should work with any pgpool that
 supports the 'show pool_status' command.

- PostgreSQL client installation
 'psql' is also required on the hosts where this is installed;
 'postmaster' is not required.

- perl
 5.0+ with Getopt::Long is needed.

3. Usage

After installing this, pgpool can be used as a heartbeat OCF-style
resource. See the Heartbeat site:

http://linux-ha.org/ConfiguringHeartbeat
http://linux-ha.org/ClusterInformationBase/


-ha.cf
Set 'crm' to true to use the Cluster Resource Manager (CRM) to
monitor pgpool.

--
crm true
--

Other settings depend on your configuration.
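For reference, a minimal ha.cf might look like the sketch below. The node
names, network interface, and timing values are assumptions; adapt them to
your cluster.

```
# Use the CRM so resources are managed through cib.xml
crm true

# Cluster members; must match the output of `uname -n` on each host
# (node1/node2 are placeholders, as in the sample cib.xml)
node node1
node node2

# Heartbeat communication path (interface is an assumption)
bcast eth0

# Example timing values
keepalive 2
deadtime 30
```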

-cib.xml

A sample cib.xml file is included below.

* Modify 'node1' and 'node2' to your hostnames, and '192.168.0.3' to your floating IP address.
* Put it in /var/lib/heartbeat/crm/ . Of course, this path depends on your
heartbeat installation.

-----Sample cib.xml start

 <cib admin_epoch="0" epoch="0" num_updates="0">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <attributes>
           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.4-fb84f606a422 tip"/>
           <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="0"/>
           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="100"/>
           <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1227682013"/>
           <nvpair id="cib-bootstrap-options-remove-after-stop" name="remove-after-stop" value="false"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes></nodes>
     <resources>
       <primitive id="resource_ip" class="ocf" type="IPaddr" provider="heartbeat">
         <meta_attributes id="resource_ip_meta_attrs">
           <attributes>
             <nvpair id="resource_ip_metaattr_target_role" name="target_role" value="started"/>
           </attributes>
         </meta_attributes>
         <instance_attributes id="resource_ip_instance_attrs">
           <attributes>
             <nvpair id="0fc14517-1d8a-40d1-a1db-941cf14d9490" name="ip" value="192.168.0.3"/>
             <nvpair id="7ef81de0-2fed-4fae-a517-ac0b96adba4e" name="cidr_netmask" value="23"/>
             <nvpair id="754d986c-bb77-4028-98fa-5a222854001e" name="nic" value="eth0"/>
           </attributes>
         </instance_attributes>
         <operations>
           <op id="op_ip_start" name="start" timeout="90" start_delay="0" disabled="false" role="Started"/>
           <op id="op_ip_stop" name="stop" timeout="100" start_delay="0" disabled="false" role="Started"/>
           <op id="op_ip_mon" name="monitor" interval="5s" timeout="20s" start_delay="1s" disabled="false" role="Started"/>
         </operations>
       </primitive>
       <primitive id="resource_pgpool2" class="ocf" type="pgpool" provider="heartbeat">
         <meta_attributes id="resource_pgpool2_meta_attrs">
           <attributes>
             <nvpair id="resource_pgpool2_metaattr_target_role" name="target_role" value="started"/>
           </attributes>
         </meta_attributes>
         <instance_attributes id="resource_pgpool2_instance_attrs">
           <attributes>
             <nvpair id="5adb33f4-6641-41a2-be3d-31264c579a67" name="pgpoolconf" value="/var/lib/pgsql/pool_ha/pgpool.conf"/>
             <nvpair id="db163efd-0e00-41f1-9a4b-dfa3c5b299e0" name="pcpconf" value="/var/lib/pgsql/pool_ha/pcp.conf"/>
             <nvpair id="9f69680a-ca9c-44b5-9644-d35e1b0286d4" name="hbaconf" value="/var/lib/pgsql/pool_ha/pool_hba.conf"/>
             <nvpair id="1fabaefd-716d-4f6c-8827-0cd79e8505ae" name="logfile" value="/var/lib/pgsql/pool_ha/pgpool.log"/>
             <nvpair id="ff4d7726-7bc1-4f3d-8d0e-8bc4aafafbf7" name="pidfile" value="/tmp/pgpool.pid"/>
           </attributes>
         </instance_attributes>
         <operations>
           <op id="op_pool_mon" name="monitor" interval="10" timeout="20" start_delay="1m"/>
           <op id="op_pool_start" name="start" timeout="20"/>
           <op id="op_pool_stop" name="stop" timeout="20"/>
         </operations>
       </primitive>
     </resources>
     <constraints>
       <rsc_colocation id="colocation_poolip" from="resource_pgpool2" to="resource_ip" score="INFINITY"/>
       <rsc_location id="ip_ping_const" rsc="resource_ip">
         <rule id="prefered_ip_ping_const" score="-INFINITY" boolean_op="or">
           <expression attribute="pingd" id="ip_ping_rule_ex1" operation="not_defined"/>
           <expression attribute="pingd" id="13b33648-e266-4567-899d-d83ed66d3107" operation="lte" value="0" type="number"/>
         </rule>
       </rsc_location>
       <rsc_location id="cli-prefer-resource_ip" rsc="resource_ip">
         <rule id="prefered_cli-prefer-resource_ip" score="10">
           <expression attribute="#uname" id="0742b4b3-d70c-4f11-945a-cfdea8cf5ff8" operation="eq" value="node1"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
 </cib>


----- Sample cib.xml end.


-pgpool.conf, pool_hba.conf, pcp.conf

Put pgpool.conf at pgpool's default config file path (e.g.
/usr/local/etc/pgpool.conf), or specify the following parameters in
cib.xml:

(File)                 (Parameter)
pgpool.conf            pgpoolconf
pool_hba.conf          hbaconf        pgpool 3.2+ only
pcp.conf               pcpconf        pgpool-II only
log file               logfile        e.g. "| logger", "/var/log/pgpool.log"
pid file               pidfile        e.g. "/var/run/pgpool.pid"
pgpool start option    options        e.g. "-d"

--

The following entries in pgpool.conf are referenced to monitor pgpool.

*port

Used to determine the port to connect to.

*health_check_user

Used as the connecting user and database name. Make sure your PostgreSQL
has a matching user/role and database, and a pg_hba.conf entry that allows
connections from the pgpool host with "trust" authentication.
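As a rough sketch of what the monitoring amounts to, the check can be
reproduced by hand with psql. The config path, the localhost target, and
using the health_check_user value as the database name are assumptions here,
not guaranteed behavior of the bundled script.

```shell
#!/bin/sh
# Sketch of a pgpool health check (assumed pgpool.conf location).
PGPOOL_CONF=${PGPOOL_CONF:-/usr/local/etc/pgpool.conf}

# Pull the two monitoring parameters out of pgpool.conf.
PORT=$(sed -n "s/^port *= *\([0-9][0-9]*\).*/\1/p" "$PGPOOL_CONF")
MONUSER=$(sed -n "s/^health_check_user *= *'\{0,1\}\([^' ]*\)'\{0,1\}.*/\1/p" "$PGPOOL_CONF")

# Ask pgpool for its status; any successful answer means pgpool is alive.
if psql -h localhost -p "$PORT" -U "$MONUSER" -c 'show pool_status' "$MONUSER" >/dev/null 2>&1
then
    echo "pgpool OK"
else
    echo "pgpool DOWN"
fi
```

A matching pg_hba.conf line on the backend might look like
"host all pgpool 192.168.0.0/24 trust" (user name and network are
placeholders).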



- Active-Active configuration

Pgpool-ha 1.1+ supports multiple configurations, so an active-active
style setup is possible.


4. Restrictions

* Pgpool-ha does not control PostgreSQL. You have to start and stop it
  manually.
* Pgpool-ha monitors pgpool, but not PostgreSQL. If something goes wrong
  with your pgpool but it still accepts SQL, pgpool-ha cannot handle it.
* If pgpool works but PostgreSQL is not available, pgpool will fail over.
* Pgpool-II is monitored with psql. Monitoring with the pcp commands is
  not supported yet.

5. License

See the ../COPYING file.


