This blog will focus on monitoring of standalone web dispatchers. Standalone web dispatchers are used to load balance web traffic towards ABAP and/or JAVA systems. A common use case is a web dispatcher in front of a large Netweaver Gateway FIORI installation.
Monitoring productive web dispatcher systems
Monitoring of web dispatchers focuses on availability and connectivity/performance.
The web dispatcher template contains most of the needed elements out of the box:
Issues with performance are often caused by limitations set in the web dispatcher configuration, so keep the metrics for these settings active.
You might want to add a specific custom metric to monitor the most important URL for your web dispatcher. Read more in this specific blog.
Next to this, set up the normal host monitoring to make sure the file system of the web dispatcher is not filling up and the CPU is not overloaded, causing availability issues for the web dispatcher function.
Monitoring non-productive web dispatcher systems
For monitoring non-productive web dispatcher systems, it is normally sufficient to restrict yourself to host and availability monitoring.
Content servers are often used to store attachments and data archiving files. They are technical systems that usually have no direct end-user access. End users normally fetch and store data from the content server via an ABAP or JAVA application.
The main part of content server monitoring is availability.
ABAP connection to content server monitoring
In some cases both your ABAP stack and the content server are up and running, but communication between them is failing at the application level. This leads to a non-working system for end users. Root causes can be firewall issues, certificate issues, or altered settings.
To test the ABAP system connection to the content server, a custom ABAP program is needed. See this blog. You can schedule the program in batch and set up a new custom metric to capture the system log entry written by the program.
System host template
For the system host, the regular CPU, memory and disk template is sufficient. Fine-tune the thresholds to your comfort level.
Database template
Important items of the database template:
Database availability
Database health checks
Backup
In most installations, the Content Server is installed with the SAP MaxDB database (similar to LiveCache).
This blog will explain the use of batch job monitoring in SAP Focused Run 4.0. If you are using the older SAP Focused Run 3.0 version, read this blog. If you are on 3.0 and are not yet using batch job monitoring, then don't start now: first upgrade to 4.0 to avoid the conversion effort.
For setup of batch job monitoring in SAP Focused Run 4.0, read this blog.
New powerful functions in SAP Focused Run 4.0 on Analytics and Job trending are explained below.
Batch job monitoring
Batch job monitoring in SAP Focused Run is part of Job and Automation monitoring:
After opening the start screen and selecting the scope you get the total overview:
Click on the round red error indicators at the top to zoom in to the details (you can't drill down on the cards below):
Click on the job to zoom in:
Systems overview
Click on the system monitoring button:
On this screen, zoom out to the overview by clicking the blue Systems text at the top left:
Now you get the overview per system:
Batch job analysis
Batch job analysis is a powerful function. Select it in the menu:
The result screen shows 1 week of data by default:
The default sorting is on total run time.
Useful sortings:
Total run time: find the jobs with the longest total run time in your system. These are most likely also the ones that cause high load, or that the business has to wait long on before they deliver results.
Average run time: find the jobs that take a long time to run on average. By optimizing the code or the batch job variant, the run time can be improved.
Failure rate: find the jobs that fail with a high percentage. Identify the issues and then address them.
Total executions: some jobs might simply be planned too frequently. Reduce the run frequency.
By clicking on the job trend icon at the end of the line you jump to the trend function.
Job trend function
From the analysis screen or by selecting the Trend graph button you reach the job trend function:
Select the job and it will show the trend for last week:
You can see whether the executions went fine or not, and at the bottom right you can see the average time the job took to complete.
In Focused Run 4.0, batch job monitoring was revised. If you are using an older version of Focused Run, read this blog on the older batch job monitoring setup.
Batch job monitoring is now combined with other automation functions like process chain monitoring. Open this tile:
Global settings
For batch job monitoring settings, open the configuration and start with the global settings:
Here you can see the data volume used and set the retention time for how long aggregated data is kept.
You can also set generic rating rules:
Activation per system
In the activation per system select the system and it will open the details:
First switch on the generic activation for each system.
Activation of jobs to monitor
Now you can start creating a job group. First select Job groups on the left, then the Plus button at the top right:
Add a job by clicking the plus button and searching for the job:
Press Save to add the job to the monitoring.
Grouping logic
You can group jobs per logical block. For example, you can group all basis jobs, all Finance jobs, etc. Or you can group jobs per system. The choice is up to you, but please read the part on alerting first; it might make you reconsider the grouping logic.
Adding alerting
The jobs added to the group are monitored, but alerting is a separate action.
Go to the Alerting part of the job group and add an alert. First select the Alert type (critical status, delay, runtime, missing job). Assign a notification variant (who will get the alert mail), and decide on alert grouping or atomic alerts.
If you do not specify a filter, the alert applies to the complete group. You can also apply a filter here to select a subgroup of the job group.
Based on the alerting you might want to reconsider the grouping.
This blog will focus on monitoring of Cloud Connector systems.
Monitoring productive cloud connector systems
The Cloud Connector is used between on-premise systems and cloud solutions provided by SAP.
Monitoring of cloud connector focuses on availability and connectivity.
The cloud connector template contains all the needed elements out of the box:
If your landscape has only one cloud connector that is also used for non-productive systems, you might find a lot of issues coming from the non-productive systems, like expired certificates, channels not working, and many log file entries. If the cloud connector is very important for your business, it is best to split off the productive cloud connector from the non-productive usage.
This way you can apply strict rule settings for production, where even a single issue will lead to an alert, while on non-production the developers will cause a lot of issues as part of their development process.
Monitoring non-productive cloud connector systems
In your landscape you might have a non-productive cloud connector that is used for testing purposes. For the non-productive cloud connectors you might apply a different template with less sensitive settings on certificates, log files and the number of failing tunnels.
Some filesystems are critical to a business, such as those used in interfaces. This custom metric group will alert if a filesystem is not mounted.
Create the Bash Script to Check the Filesystem Status
Firstly, we need to create a bash script that takes the filesystem as its input argument and then checks its status. Create the following script called /sbin/checkfilesystemmounted.sh (owner is root, permissions 755). You may put this script somewhere else if you prefer, but be sure to refer to the correct location later on in this post.
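A minimal sketch of such a script, assuming findmnt is available on the host and the metric is emitted as a single JSON line named FileSystemMounted (matching the response data shown further down), could look like this:

#!/bin/bash
# Minimal sketch of /sbin/checkfilesystemmounted.sh
# The filesystem to check is passed as the first script argument.
if findmnt "$1" > /dev/null 2>&1; then
    MOUNTED=1
else
    MOUNTED=0
fi
# Emit the metric in the JSON format expected by the custom metric (RETURNFORMAT: JSON)
echo "{type:integer, name:FileSystemMounted, value:$MOUNTED}"
exit 0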
The findmnt command returns the mount details if the filesystem is mounted. The filesystem is passed as a script argument in variable $1. If the filesystem is mounted, the script returns integer 1. If the filesystem is not mounted, the script returns integer 0. For example, to check your desired filesystem, execute it like this as root:
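/sbin/checkfilesystemmounted.sh /interface/inbound
(The path above is only an illustration.)

To test the complete chain through the SAP Host Agent, the script also has to be registered as a custom saphostctrl operation; this is the operation that the metric configuration below refers to. As a hedged sketch, assuming an operation descriptor under /usr/sap/hostctrl/exe/operations.d/ (verify the exact descriptor and parameter syntax against the SAP Host Agent documentation):

Name: checkfilesystemmounted
Command: /sbin/checkfilesystemmounted.sh $[FILESYSTEM]
Description: Check if filesystem is mounted
Platform: Unix

The registered operation can then be called via saphostctrl, for example:

/usr/sap/hostctrl/exe/saphostctrl -function ExecuteOperation -name checkfilesystemmounted FILESYSTEM:/interface/inbound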
The result should be as per the following example:
Webmethod returned successfully
Operation ID: 0A02C69098121EDDA68C041B50FE858D
----- Response data ----
description=Check if filesystem is mounted
{type:integer, name:FileSystemMounted, value:1}
exitcode=0
Create the Custom Alert in SAP Focused Run
In Focused Run, we create an alert in a Linux host monitoring template. For example, the alert name is “Interface Filesystem not Mounted”. The Alert should be in Category “Exceptions” and the Severity is up to you. In this case it is 9.
Create the Custom Metric Group in SAP Focused Run
Next, we create the custom Metric Group. A Metric Group allows variants to be created, and each variant corresponds to a filesystem you wish to monitor.
Overview Tab:
Name: “Interface Filesystem not Mounted”
Category: Exceptions
Class: Metric Group
Data Type: Integer
Technical Name: INTERFACE_FILESYSTEM_NOT_MOUNTED
Data Collection Tab:
Data Collector Type: Diagnostic Agent (push)
Data Collector Name: OS: ExecuteOperation
Collection Interval: 5 Minutes (depending on the criticality)
CUSTOM_OPERATION_NAME: checkfilesystemmounted – This corresponds to the custom operation for saphostctrl created earlier
METRIC_NAME: FileSystemMounted – This corresponds to the name of the metric in the JSON output by the bash script
RETURNFORMAT: JSON – This is the output format of the bash script
Usage Tab:
Threshold Tab:
As the script returns the numeric value 0 if the filesystem is not mounted, set the threshold to alert when the value is 0.
Assignment Tab:
Assign to the custom alert created earlier.
Add Variants
The variable passed to the saphostctrl operation is “FILESYSTEM”. We can add the rest of the filesystems as individual variants. The format for the operation parameters is as follows:
FILESYSTEM:/the/filesystem/you/want/to/check
For example:
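Assuming, purely for illustration, two interface mount points:

FILESYSTEM:/interface/inbound
FILESYSTEM:/interface/outbound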
You can enter as many filesystems as you like as separate variants.
Activate Alert
Go to the “Metrics, Events, Alerts Hierarchy” tab, and activate System Monitoring.
Testing the Metric
In a non-production environment, try to unmount a monitored filesystem; at most 5 minutes later, an alert should be produced.
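For example, assuming the illustrative mount point from above and a matching entry in /etc/fstab:

umount /interface/inbound

Once the alert has appeared and been verified, remount the filesystem with mount /interface/inbound and confirm that the metric turns green again.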
This blog will focus on monitoring of EWM systems.
Monitoring productive EWM systems
EWM systems are often used as standalone systems that make sure logistics and warehousing can keep running at high availability. If the connected ECC or S4HANA system is down, EWM can continue to support logistics operations.
EWM can be an older version based on the SCM/BI system core. Newer EWM systems use S4HANA with EWM activated as a standalone system.
Specific to an EWM system is the use of qRFC and the CIF (Core Interface). Also, many EWM systems have users that interact with the system via ITS GUI based handheld scanners.
CIF monitoring
The CIF is the core interface between the SCM and ECC systems. The interface typically uses RFC and qRFC, and it works both ways.
Set up the monitoring for the CIF-specific RFCs and qRFCs:
For a BW system, some numbers are typically higher than on an ECC or S4HANA system. Response times of 1.5 seconds would indicate horrible performance on ECC, but are normal on a BW system.
System host template
For the system host, the regular CPU, memory and disk template is sufficient. Fine-tune the thresholds to your comfort level.
This blog will focus on monitoring of SLT systems. These systems are mainly used to replicate data from source systems like ECC and S4HANA towards target systems like Enterprise HANA and HANA Cloud.
Monitoring productive SLT systems
When monitoring a productive system, you will need to fine-tune the monitoring templates for:
ABAP 7.10 and higher Application template, for the ABAP application
ABAP 7.10 and higher Technical instance template, for the ABAP application servers
System host template
Database template
ABAP APPLICATION TEMPLATE
Make sure the ABAP application template covers the following items:
Availability:
Message server HTTP logon
System logon check
RFC logon check
License status
Certificates expiry
Update status
Performance and system health:
Critical number ranges
SICK detection
Dumps last hour
Cancelled jobs last hour
Security:
Global changeability: check that the system is closed for changes
Locking of critical users like SAP* and DDIC (see blog)
Fine-tune the metrics so you are alerted on situations where the system is having issues.
SLT uses far more background and dialog processes than a normal system. It is basically continuously busy processing records.
SLT DMIS template for SLT system
For SLT systems, apply the SLT DMIS template:
In the SLT system itself, make sure job /1LT/IUC_HEALTH_C with program R_DMC_HC_RUN_CHECKS runs. This will collect data that is needed for SLT itself, but which is also re-used by SAP Focused Run.
In any case, you should make sure to regularly apply the notes for the DMIS component. See this blog.
SLT DMIS dummy template backend system
For SLT to work, the DMIS component is installed in both the SLT system and the backend system. Because of the DMIS component in the backend system, Focused Run will pick up the template for that system as well. But this does not make sense for monitoring, since it is the source system and not the SLT system.
For this reason, set up a dummy empty template with every monitoring item disabled:
Assign this dummy template to your backend system.
ABAP APPLICATION SERVER TEMPLATE
Make sure the ABAP application server template covers the following items:
Availability:
Local RFC logon test
Local HTTP logon test (if any BW web scenario is used)
Process Chain Monitoring in SAP Focused Run is possible via Job and Automation Monitoring, which is available as of SAP Focused Run 3.0 FP02.
You can launch the Job & Automation Monitoring app in the Advanced Application Management section in the Focused Run launchpad.
When you launch the app you will be asked for a scope selection for which you can specify the systems for which you want to activate Process Chain Monitoring.
To start the setup of Process Chain Monitoring, click on the settings button.
In the settings popup click on the pencil button under Technical Systems.
In the next popup select the system for which you want to configure process chain monitors by clicking on the area as shown below.
In the next screen, in the Monitoring tab, click on the + sign to create a new filter to activate data collection.
Now provide a filter name, then in Job Type select SAP BW Process Chain, and save.
After creating the filter move to Alerting tab and click on the + sign to create a new alert.
You can create the following types of alerts.
Critical Execution Status: The Execution Status is rated green if a job finished successfully, red if the job execution did not finish (i.e., aborted), and yellow if a job finished with warnings or errors without aborting.
Critical Application Status: The Application Status is rated green if a job successfully processed the application data. It is rated red if, for example, an ABAP job execution writes errors into the application log, and yellow if there are warnings but no errors.
Critical Delay: The Start Delay is rated green if the technical delay of a job (e.g., for an ABAP job, the time passed until the job gets a work process assigned) did not exceed the defined threshold.
Critical Runtime: The Run Time is rated green if the runtime of a job did not exceed the defined threshold.
To create the alert first select the alert type.
Then in the Alert Filters section, provide the BW Process Chain name for which you want to activate alerting. You must also provide the job type as BW Process Chain. You can optionally enter further filters like Execution User, Executable Name, ABAP Client and whether it is a Standard Job or not.
Note: With Job & Automation Monitoring you can create alerts for standard ABAP jobs as well. The filters Executable Name and Standard Job are applicable only to the ABAP Job type.
Optionally you can also set a Notification Variant and Alert Severity and enable Automatic Alert Confirmation in the Alert Settings section.
Optionally you can also provide Resolution Instructions in the Alert Resolution area.
If you select alert type Critical Delay or Critical Runtime, you also have to enter the thresholds.
Finally click on the Save button to save and activate the alerting.
Note: When we activate monitoring of process chains by creating the filter in the Monitoring tab, we activate data collection for process chain monitoring. This enables data collection for all process chains of that managed system. You can see the status of all process chain runs for that system on the main page of the Job & Automation Monitoring app. Additionally and optionally, you can create/enable alerting in the Alerting tab to alert on specific process chain failures.
Note: Since the launch of Job & Automation Monitoring in Focused Run 3.0 FP2, the old Job Monitoring feature has been renamed to Job Monitoring ABAP Only. The Job Monitoring ABAP Only functionality is completely deprecated as of the release of Focused Run 4.0.
For more details on Job & Automation Monitoring you can refer to SAP documentation here.