Introduction
This guide describes the configurations associated with setting up the system for transaction processing through the gateway.
Intended Audience
This guide will be useful for administrative users, such as system administrators and gateway managers, who need to set up the system to meet particular needs.
Data Querying Module
The Data Querying Module (DQM) is a mechanism that allows execution of a limited number of SQL queries against the underlying database using UniPay's user interface. Generally, it works using a concept similar to the one that the entire user interface is built on. However, it allows execution of a raw query with a limited SQL syntax.
DQM serves two purposes. First, it simplifies merchant support by allowing execution of custom queries and export of the result data to Excel, which helps the support team respond to inquiries more quickly. Second, it simplifies management of the system: it allows technical administrators with an appropriate level of access to query the database directly, even when no direct access to the database is available. Therefore, DQM shortens resolution time for various issues and reduces the amount of communication needed to resolve a particular issue.
LQM Work
To simplify retrieving data from the log files, a separate form - the LQM form - was created in the UniPay user interface, so that anyone who has the System user role can log in and pull out any data needed.
The LQM form is integrated with ElasticSearch and Kibana.
More detailed information about ElasticSearch can be found at: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html
LQM Form
To open the LQM form, go to Perspective -> Administration -> System -> IQ and choose LQM.
The numbers below correspond to the elements marked in the screenshot:
1. Execute button - used to run the search (press it after entering the necessary query parameters).
2. Date - after entering the time frames, click Set; this automatically adds them to the input field (number 7).
3. Template - this button is not currently in use.
4. Logger Name dropdown - lets you select one of the automatic queries.
5. Field Name - helps make the query more accurate by adding the desired field, for example, the host or the entire message you are looking for.
6. Time Period dropdown - sets the time frame for the search: the last 5, 15, or 30 minutes; the last 1, 3, or 24 hours; yesterday; or the last 5 days.
7. Search request input field.
8. Date input field - enter the date manually or use the built-in calendar.
9. Time input field - clicking it sets the current time; you can change it manually or use the up and down arrows.
10. Timestamp column - displays the logging time.
11. Host column - shows which host the log entry relates to.
12. Short Message column - contains the logged message itself.
Please note! If you click the arrows near the column names, the data is automatically sorted in ascending order; after a second click, in descending order.
To make the simplest request:
- select the Time Period (number 6 in the screenshot) -> Yesterday;
- press the Set button (number 2 in the screenshot);
- in the Field Name dropdown (number 5), select full_message;
- in the input field (number 7), after “full_message:”, insert the following: 'Pingdom test is successfull' (single quotes are required);
- press the Execute button (number 1).
Here is the request:
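Based on the steps above, the query string in the input field (number 7) should read:
full_message:'Pingdom test is successfull'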
Here are the results:
1. By clicking on this symbol, you can save all the results in a .csv file.
2. By clicking on the “magnifying glass” symbol, you will see detailed information about this log (screenshot below).
3. The number of entries per page (default: 100) and the total number of entries are shown.
Building search queries
Let's look at automatic queries in the Logger name:
- API:
- Realtime (Info) displays the current status.
- Realtime (Debug) should be enabled for the appropriate accounts in Settings. Debug displays more information than Info: first the request is displayed, then the server response, as two separate entries.
- TMS (Terminal Management System) (Info) shows terminal operations.
- TMS (Error) shows errors that occurred while working with terminals.
- Onboarding - global merchant registration.
- Email Notification - information about emails sent from a merchant to a client.
- Error shows all application errors for all UniPay and UniBroker nodes for the specified period.
- Hibernate Data Service - Information on database communications that took more than 5 seconds. The main goal is server performance analysis.
- Pingdom is a mechanism that checks servers. Temporarily not functioning.
- Reports - collects logs by nodes (if there are problems with reports).
- Terminal Cloud shows logs about client requests to the terminal.
- User Access - general information about system requests and users who logged in.
Please note! If you select another Logger name, you must delete the previous one; it will not disappear from the input field until you delete it.
Using the Field Name, you can enter the following queries:
- host - search for the specific host.
- full_message - request for a full message. If you are looking for a whole phrase, you must enclose it in single quotes. If you are searching for individual words or numbers, no quotes or other signs should be used.
- severity - logging level.
- SourceClassName - this option is temporarily unavailable.
- Other and all of its components are used for a more precise search.
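For example, a query combining several of these fields might look like this (the host name, severity value, and message text below are hypothetical):
host:node1 AND severity:ERROR AND full_message:'connection timeout'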
DQM Work
DQM works like any other form in the system. However, it uses a separate, dedicated read-only database connection and allows the direct execution of SQL queries with a limited syntax. The result can be viewed right away on the user interface or downloaded in CSV format for subsequent use.
Security measures are in place to ensure that queries are verified before execution and that the mechanism remains PCI compliant:
1. DQM ensures that there is only one statement provided. If more than one statement is provided, the query is rejected.
2. DQM verifies that the type of statement is SELECT. Everything else will be rejected.
3. DQM verifies that SELECT query does not contain the following options:
- HIGH_PRIORITY
- SQL_CACHE
- SQL_BUFFER_RESULT
- FOR UPDATE
- LOCK IN SHARE MODE
- INSERT INTO
4. DQM retrieves and displays at most 500 rows on the user interface (a limit of 500 is imposed). It also provides an option to download the result as a CSV file; in this case, up to 10,000 rows can be retrieved.
5. Any attempts to reference databases other than the unipay database are disallowed. Any direct or indirect references (through joins) to the following tables are disallowed:
- unipay.ca_key
- unipay.key_registry
- unipay.resource_content
- unipay.iapp_data_vault
- unipay.iapp_vault_content
DQM can be accessed by gateway users who have the System 2 security level. DQM does not take the data access policy into account and assumes that the user has access to all portfolios, resellers, and merchants within the system.
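For illustration, here is the kind of single-statement SELECT that passes the checks above; the table and column names are hypothetical:
SELECT id, create_date, transaction_amount
FROM unipay.sample_transaction
WHERE create_date >= '2024-01-01'
LIMIT 500;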
6. To avoid entering queries manually, the DQM form allows the use of various templates for running queries. The templates are divided into two groups:
- System Queries - provides templates for running the following system queries:
- Process List - provides templates for running the queries associated with the Show Processlist queries for DB users with different privileges:
- Show UniPay Processlist - returns a list of all active threads available to the unipayw DB user. This list does not include processes that are currently in the Sleep status.
- Show UniPay Full Processlist - returns a list of all active threads available to the unipayw DB user.
- Show DQM Processlist - returns a list of all active threads available to the unipayr DB user.
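A SELECT-only equivalent of these templates can be expressed through MySQL's information_schema (a sketch, not necessarily the exact template text):
SELECT * FROM information_schema.PROCESSLIST WHERE COMMAND <> 'Sleep';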
SSL Certificate Update/Replacement Process
Overview
Companies that process payments must ensure the confidentiality of their users’ sensitive data including their contact information, credit card/account numbers, logins, passwords, emails, etc. One of the most proven ways to prevent sensitive data from being compromised is to secure your connections with an SSL certificate.
For a better understanding of the process of SSL certificate installation/replacement, you can review the information below:
An SSL certificate is a text file with encrypted data that is installed on the server so that all sensitive communications between the gateway and a user are secured/encrypted. SSL certificates digitally bind a cryptographic key to a domain name, server name or hostname, an organizational identity (i.e. company name) and location of the gateway owner. When installed on a web server, it activates the padlock and the HTTPS protocol, allowing a secure connection from the gateway to a user’s browser.
Before purchasing your SSL certificate, make sure that you have chosen the appropriate certificate type based on your needs:
- Domain Validated (DV)
- Organization Validated (OV)
- Extended Validation (EV)
All SSL certificates require a pair of unique and unforgeable keys: a private and a public one.
- The private key is a separate text file that is used in the process of encrypting/decrypting data sent between the gateway and users. A private key is generated by the certificate owner when requesting the SSL certificate with a Certificate Signing Request (CSR) and is stored on the server. The Certificate Authority providing your certificate does not create or have access to your private key. In order to protect a private key file, you can use a password/passphrase. This prevents unauthorized users from decrypting the file.
- The public key is embedded in the SSL certificate and works together with your private key to encrypt all the information sent from the user to the server, which is decrypted on the server side with the private key.
In order to increase security and improve compatibility of the certificate with different browsers, Public Key Infrastructure utilizes a certificate chain to provide a secure connection. The chain contains an end-entity certificate and a CA bundle. A CA bundle is a file that contains root and intermediate certificates. You receive the complete CA bundle from the certificate authority either as a single file with a *.ca-bundle extension (sometimes delivered in a zip archive) or as separate root and intermediate certificate files. If you have received the intermediate and root certificates as separate files, you should combine them into a single file to form a complete CA bundle. The certificates in the CA bundle should be in the following order: new key, new certificate, intermediate and CA certificates.
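For example, on a Linux server the separate files can be combined with cat (the file names below are hypothetical):
cat intermediate.crt root.crt > mycompany.ca-bundle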
A keystore is a file that contains the SSL certificate signed by the certification authority and the private key, and it is password-protected. You should provide a keystore name and password when you create the private key and Certificate Signing Request (CSR). For security reasons, we highly recommend using different passwords for the UniPay and UniBroker keystore files.
The most common reasons for installing/replacing an SSL certificate:
- the secure connection has not been previously used
- the certificate has expired
- the domain has been changed
- the host has been changed
- other reasons (for example, you decided to change the certification authority)
Certificate requirements
- the certificate is signed by the certification authority
- the certificate is valid
- the domain and host are spelled correctly
Installation/Replacement Procedure
1) Before installing/replacing a certificate, make sure that you have generated the certificate and received the required attributes from the Certificate Authority. These attributes include:
- SSL certificate signed by the Certificate Authority;
- CA bundle with root and intermediate certificates;
- private key for the certificate;
- password for the certificate.
2) On the server, the certificate and the private key are stored in a keystore. When installing/replacing the certificate, the keystore must either be generated or regenerated:
- if the certificate is being installed for the first time, the keystore is created at the application deployment level;
- if the certificate must be renewed, you should generate the keystore for UniPay and UniBroker using the openssl and Java keytool commands.
openssl pkcs12 -export -chain -in mycompany.crt -inkey mycompany.key \
-out mycompany.p12 -name mydomain -caname root
keytool -importkeystore \
-deststorepass "$PASSWD" -destkeypass "$PASSWD" -destkeystore unipay.keystore \
-srckeystore mycompany.p12 -srcstoretype PKCS12 -srcstorepass "$PASSWD" \
-alias mydomain || echo "need mycompany.p12"
Note: PASSWD is the newly created keystore password.
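To verify the resulting keystore, you can list its contents with keytool (a sketch, assuming the keystore name and password used above):
keytool -list -v -keystore unipay.keystore -storepass "$PASSWD"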
3) Place the new keystore into the UniPay and UniBroker ~/resources directories on the server.
3.1. Place the certificate into the UniPay and UniBroker keystores.
4) Locate the HAProxy directory where the certificate file is stored. The directory should contain the following files associated with the certificate: CA bundle, certificate, private key. As a rule, the path to this directory is /etc/ssl/private/.
4.1. Place the certificate into the identified HAProxy directory.
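HAProxy typically expects the key, certificate, and chain combined into a single PEM file. A minimal sketch, assuming the order described above and hypothetical file names:
cat mycompany.key mycompany.crt mycompany.ca-bundle > /etc/ssl/private/mycompany.pem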
5) Should the system contain any other intermediate access points, such as firewalls, the certificate installation/replacement should be performed on these points in accordance with the product documentation.
6) Restart the UniPay and UniBroker JBoss nodes and all HAProxy nodes.
Certificate installation/replacement with the ansible scripts
1) Log in to the admin node under the uniadmin user.
2) Go to the ~/jenkins/common directory (cd ~/jenkins/common).
3) Write the new certificate, private key and intermediate certificates to the tmp/certificate.crt file.
4) Run the configuration script:
ansible-playbook site.yml -e @passwords -t haproxy
Additional steps
Be advised that there are additional steps you should follow when changing the Certificate Authority if you use terminals:
Please contact our support team if you are replacing the SSL certificate with one signed by another Certificate Authority to make sure that this issuer is supported by our terminal logic.
Verification
After installing/replacing a certificate, you should use the following required and optional steps to make sure that the certificate is installed correctly and the connection is secured.
Required steps:
1) To verify that the new certificate is installed/replaced correctly, use the openssl command.
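For example, you can inspect the certificate the server presents (substitute your actual host name):
openssl s_client -connect your-server-name:443 -servername your-server-name </dev/null | openssl x509 -noout -subject -dates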
2) To verify that the connection is secured, perform a Pingdom check by calling the following URL:
https://[your-server-name]/pingdom
If the certificate was added successfully, the pingdom message will be as follows:
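Presumably (based on the log message quoted earlier in this guide), the response text is similar to:
Pingdom test is successfull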
In case the connection is not protected, you will get a standard browser message.
Optional steps:
3) Run the wget command.
Example of result in case of unsuccessful installation:
$ wget https://23.92.222.254/pingdom
--2019-02-06 12:49:17-- https://23.92.222.254/pingdom
Connecting to 23.92.222.254:443… connected.
ERROR: certificate common name ‘*.mydomain’ doesn’t match requested host name ‘23.92.222.254’.
To connect to 23.92.222.254 insecurely, use `--no-check-certificate`.
youruser@youruser-note:~
4) Run the curl command.
Example of result in case of unsuccessful installation:
$ curl https://23.92.222.254/pingdom/
curl: (51) SSL: certificate subject name (*.mydomain) does not match target host name ‘23.92.222.254’
5) Run a test transaction with the proxy emulator.
Data Purging
The amount of data in the database tables is continually growing, which can lead to a slowdown of the system. In the context of data storage, there are tables that are used for diagnostic purposes (i.e. if errors occur, information from these tables is used to identify and solve problems) and tables that contain data the system generates while performing certain processes, both to simplify the execution of these processes and for diagnostic purposes. More than two weeks after such records are created, they lose relevance and clog up the system, increasing the size of the database. To control the growth of such tables, the system has an automatic data deletion mechanism called purging. Unlike the archiving mechanism (described below), in which the data is archived and stored in the database, purging deletes the data permanently.
When purging is done, the following tables get cleaned up (the values in parentheses are the number of days of storage, after which the data is deleted):
- email_content.date (90 days and older)
- charge_transaction_log (365 days and older)
- terminal_log (60 days and older)
- terminal_log_content (60 days and older)
- iapp_job_message_archive (60 days and older)
- iapp_job_message_content (60 days and older)
- ftp_gates_file_content.processed_date (365 days and older)
- iapp_system_error_log (90 days and older)
- iapp_response_data.create_date (older than 24 hours)
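Conceptually, each purging iteration issues bounded DELETE statements of the following form (a sketch only; the column name is assumed and the actual implementation is internal):
DELETE FROM terminal_log WHERE create_date < NOW() - INTERVAL 60 DAY LIMIT 10000;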
Performance Mode and Operation Principle of the Mechanism
Settings for DB Purging
The purging mechanism is activated using a designated job - unicharge.system.process-data-purging. To ensure that data is cleared using this mechanism, make sure that the job is active.
In addition, the unipay.system.purging_disabled property is involved in the process; it can be used to stop the process. The stopping procedure is described below.
Mode, Frequency, Time, and Duration of the Process Performance
After the job is activated, the process launches automatically once a day - from 3 am to 5 am server time.
Regardless of the amount of data that is going to be deleted, the process runs for no more than two hours in order not to overload the server. However, the duration of the process depends on the performance of the server and the amount of data that needs to be deleted. If purging is performed for the first time, the amount of data to be deleted may be significant, so cleaning can take up to 12 hours, depending on the volume. In this case, not all data is deleted in one day: the entire volume is divided into several parts, each of which is processed daily on the schedule (from 3 to 5 am) until all data is deleted. After the initial volume of data is cleared, purging is triggered only for new data that is older than the retention periods described above. In this case, purging takes 1-1.5 hours.
DB Purging Monitoring
Expect documentation update soon.
Stopping of the DB Purging
There are two reasons why the purging process may have to be stopped:
- The process got frozen. In this case, you need to execute the KILL command via MySQL (see the example after this list).
- The process runs correctly, but you need to stop it. In this case, you can stop the purging process using the designated property - unipay.system.purging_disabled. This property controls whether purging stops or continues after the end of each iteration into which the process is divided. By default, the property value is set to false. If you set it to true, the purging process will be disabled after the end of the next iteration.
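For the frozen-process case, a minimal MySQL sequence looks like this (replace 12345 with the actual thread id):
SHOW FULL PROCESSLIST; -- locate the id of the stuck purging thread
KILL 12345;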
You can change the property value in two places:
- on the server
- on the user interface on the Settings form available under the System perspective => System button.
Running the Cluster Diagnostics Script
To fix a server issue, it is important to collect and analyze a set of logs and parameters prior to performing any troubleshooting steps. For security reasons, United Thinkers DevOps engineers may not have access to all necessary instances on the clients’ production servers. Thus, in some cases, it may be impossible to perform diagnostics and collect all the information necessary to fix the issue or provide detailed troubleshooting instructions.
Cluster diagnostics can be performed automatically with the script.
To run the diagnostics script, follow these steps:
1. Download the archive with the script provided by the support team and unzip it to the /diagnostics directory.
2. Open the /diagnostics/inventory file and set the variables, including:
- the name of the admin user that has access to all nodes on which the data should be collected,
- hostnames of the UniPay/UniBroker nodes,
- the path to the jboss instance on each node:
[all:vars]
ansible_user={uniadmin}
[unibroker:vars]
jboss_logs=/opt/unibroker/jboss7/standalone/log
[unipay:vars]
jboss_logs=/opt/unipay/jboss7/standalone/log
[unibroker]
{unibroker1hostname}
{unibroker2hostname}
[unipay]
{unipay1hostname}
{unipay2hostname}
3. Run the commands in a terminal to create a password vault and set the admin user password:
# create a password for the vault
$ echo changeIt > tmp/vault_password_file
# open the vault and set the admin user password
$ ./ansible-vault.sh create passwords
ansible_ssh_pass: changeIt
4. Check the nodes reachability:
$ ./ansible.sh nodes -m ping -e @passwords
If the ping check fails, double-check that all previous steps have been performed correctly.
5. Run the diagnostics script:
$ ./ansible-playbook.sh scripts/diagnostics.yml -e @passwords
6. Download the output archive (path: /diagnostics/tmp/diagnostics/data_archive.tar.gz) and forward it to the support team for log analysis.