The replication service is an integral part of the IR Server in RK7. The following components are installed along with it:

RPL8_Lib.dll, the replication service itself, and DB_RPL8.ucs, the replication mapping file (created for each UCS software product by the developer of that product).

Before you set up replication, make sure that:


Warning! Running the replication service is not the same as running replication. The service is only an engine: the replication process must be launched manually. Once it has received such a command, the replication service will keep the replication process running continuously and will stop it only in case of critical errors, which are written to the service log file. From then on, the replication setup program must be run in a single copy only, i.e. if it was run on the main node, it is not allowed to run another copy of the program on the local nodes.

1 Replication setup on the main node

All replication nodes can be set up from a single computer on which the setup program is run.

1. Run RPL_Manager.exe (Figure 1). 

Figure 1. RPL_Manager.exe
2. After connecting to the Common Server, expand the list of services registered on the server. Open the replication service of the Master node. Switch to the Settings tab. Click the Load settings from file button to load the mapping from the DB_RPL8.ucs file (Figure 2).

Figure 2. Mapping load

3. After the mapping is loaded, switch to the Install tab and configure the replication node and the database connection (Figure 3).

Figure 3. Setting up the Master node and the database connection

Type - MASTER
Name - the name assigned to the node
Host - the service IP address (0.0.0.0 is recommended for the Master node so that the service is available on all network interfaces of the host). Leave Port, Protocol and the other settings in this section at their default values.

The Count transaction sending parameter controls the size of the data packet. The default is 1000 transactions. If the network is fast enough to transfer large files (3-10 MB) within 1-5 seconds, you can increase this value; for simple databases whose tables contain no large binary objects (BLOB fields), it can be raised to 2000-3000 transactions. If the network is slow or the database contains populated BLOB fields, it is recommended to decrease the value to 100-200 transactions.
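For example (illustrative values only, chosen within the ranges above): on a fast local network with a database that contains no BLOB data, 2500 transactions per packet is a reasonable setting, while on a slow link with large BLOB fields a value of about 150 is the safer choice.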

In the database connection area you can set up the connection to the database of the main replication node. Enter any unique connection name and the DB server access details, and set the database connection string. To build the connection string easily, click the Generate button and choose the required server and database. If the DB server cannot be reached from the computer where you are making the settings, we recommend creating a UDL file with a connection to the required database directly on the DB server and copying the resulting connection string from that file.
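As an illustration (the provider, server and database names below are placeholders, and your installation may use a different DB server type), a UDL file created for a Microsoft SQL Server database typically contains a connection string of the following form, which can be copied into the connection string field:

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=RK7DB;Data Source=DBSERVER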

The read/write connection pool settings define the maximum number of DB query threads in the replication service. Do not specify fewer than 10 here, because some of the threads are used for internal needs and the rest are used to create the DB connections that load the packets.

On the main node, the number of threads should be calculated using the formula:

10+([number of local nodes]*2).
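For example, a scheme with 15 local nodes needs at least 10+(15*2) = 40 threads in the write pool.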

Smaller values may cause packets to queue up on the main node, because the server will not manage to load them. In systems with a large number of local nodes, the number of write threads can be increased to 2-3 times the value obtained from the formula above; if the system performance is high enough, this will speed up packet loading. Avoid excessively large values, since too many threads place a heavy load on the PC resources by the DB server. See Figure 3 for an example of correct main node settings.

4. After creating the settings, click Install, save the settings and launch replication. The Event Log at the bottom of the window will show a record of the successful launch of node replication (Figure 4).

Figure 4. Record of the successful launch of node replication

After replication has been launched on the main node, use the Extract pure mapping button (see Figure 5) to save the main node settings to a file (a standard file selection dialog opens). These settings are unique to the main node and will be loaded on each new local node as the main node settings.

Figure 5. Extract pure mapping

Now the main node replication setup is finished and you can set up the local nodes.

2 Replication setup on the local node

Setting up a local replication node in RPL_Manager is similar to setting up the main one, apart from a few points. The settings should be made in RPL_Manager launched on the main replication node.
To set up replication: 
1. Open the required local node. If it is registered on another CS, add it in the same way as the main node (Figure 1). Load the settings from the main node settings file that was saved with the Extract pure mapping button after the main node replication was launched (Figure 6). Do not load them from the "Original mapping" file received from the developer.

After the settings are loaded, switch to the Install tab and configure the replication node properties and the access to the DB.

Type - SLAVE

Server name - the name of the node specified in Step 1.

Master server name - the name of the main replication node set up in Step 1.

Host - the IP address of the main replication node.

Leave Port, Protocol and the other settings in this section at their default values. If necessary, specify the number of transactions per packet adapted to the network conditions. You can also specify the number of the starting transaction if an archive of the main node database is used on the local node.
2. In the database connection area, set up the connection to the database of the local replication node. Enter any unique connection name and the DB server access details, and set the database connection string. For the local node, 10 connections are usually enough for both the read and the write DB pools. See Figure 6 for an example of correct settings.

Figure 6. Local replication node and DB connection settings

3. After creating the settings, click Install, save the settings and launch replication.
4. The Event Log at the bottom of the window will show a record of the successful launch of node replication (see Figure 4).

Now the local node replication setup is finished and you can set up the next local node.

3 Replication restart and replication activity monitoring 


To restart replication that has already been configured on the nodes, you do not need to load the settings from the mapping file or from the main replication node configuration file.
All settings are saved in the DB of the replication node and can be loaded by clicking the Load settings from server button (Figure 7).

Figure 7. Settings load

After the settings are loaded, click the Start button at the top of the Replication Node configuration window.
Attention! Launch the main replication node first; after that, run the local nodes in any order.

If RPL8_SERVICE fails or is restarted (for example, when the replication node computer is rebooted), the replication process is launched automatically. RPL_Manager also allows replication to be started and stopped for a group of nodes: choose Group settings in the button bar, select the required nodes in the window that opens and click Start or Stop.
Replication activity can be monitored via the web servers built into the Common Server and the replication service.

To monitor the CS, go to http://[Common Server IP address]:[Common Server port]/?info
To see the replication nodes, find the required replication node in the list of nodes and follow the link to the web server of its replication service (Figure 8).
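For example, if the Common Server is available at 192.168.0.10 and listens on port 1234 (both values are hypothetical and only illustrate the address format), the status page address would be http://192.168.0.10:1234/?info.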

Figure 8. Replication activity control

Figure 9 shows a replication service status page:

Figure 9. Replication service info page

If replication is running, the green caption «State - This server is running» is displayed; if it is stopped, the red caption «State - This server is stopped» is displayed. The Time watchdog parameter shows the result of the service self-diagnostic check; in normal condition this caption is also green. If the service threads hang, the time left until the service restarts is displayed in red.

The timeout for a non-responding thread is set in minutes by the AlarmTime(m) parameter in the [FS_TYPE] section. When the timeout elapses, the thread is considered hung and the service is restarted. By default, the timeout is 15 minutes.
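As an illustration only (the exact key spelling and the configuration file that contains the [FS_TYPE] section may differ in your installation, so treat this as an assumption), raising the watchdog timeout to 30 minutes might look like this:

[FS_TYPE]
AlarmTime(m)=30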

The number of the last transmitted packet and the date/time of the last operation, which can be used to check that replication is alive, are displayed on the web server of the main node replication service (Figure 9).

Analyze each “main node (MASTER) - local node (SLAVE)” pair.

In this example, the current packet number and the time of the last operation for the local node show how many packets had been transmitted from the main node to the local node (and recorded in its database) by that point in time.

The current packet number and the last operation time for the main node show how many packets had been transmitted from the local node to the main node (and recorded in its database) by that point in time.
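As a purely hypothetical illustration: if the status page of the local node service shows current packet 2500 with a last operation time of 12:40, then 2500 packets had been transmitted from the main node and written into the local database by 12:40; if the main node page shows packet 340 for the same pair, then 340 packets had travelled from the local node to the main node by the time displayed there.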

The Replication Service log files are located in the LOGS folder, which is a subfolder of the Replication Service installation folder.

If the packet number and the change date on the web server of the main replication node do not appear or do not change, check the most recently created log file in that folder.