
Utilities



Overview

Utilities is a set of tools used to implement, maintain and support business processes within the system. These tools include a scheduling engine responsible for scheduling and executing batch processes in CRM.COM, a data integration mechanism used for data imports and exports, and an archiving utility that archives old data to control the size of the database.

Major features

  • Execute and manage batch process tasks, such as stopping or deleting them
  • Supports single-entity rollback upon batch process task interruption. For example, if billing is interrupted after billing 1,000 accounts, the changes will not be rolled back. However, if upon interruption the billing of a single account was not complete, the system will revert any changes made to that specific account.
  • Archiving of data to control database size. Records associated with archived data are marked for reference.
  • Import hardware directly into a warehouse and create rewards participants using import files.
  • Export vouchers
  • Use Pentaho kettles for custom-made imports and exports. Schedule imports and exports to run once in the future or on a recurring basis.
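The single-entity rollback behaviour described above can be sketched as follows. This is an illustrative simulation, not CRM.COM code: the account balances and the flat billing charge are invented for the example, but the semantics match the description (completed records keep their changes; only the record being processed when the interruption occurs is reverted).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of single-entity rollback in a batch process:
// each record is handled in its own "transaction"; on interruption,
// only the in-flight record's changes are reverted.
public class BatchRollbackSketch {

    // Bills each account in turn; if the batch is interrupted while
    // billing an account, only that account's balance is restored.
    public static void runBilling(Map<String, Integer> balances,
                                  List<String> accounts,
                                  String failingAccount) {
        for (String account : accounts) {
            int before = balances.get(account);     // snapshot for single-entity rollback
            try {
                balances.put(account, before + 10); // the billing change (invented amount)
                if (account.equals(failingAccount)) {
                    throw new RuntimeException("interrupted mid-record");
                }
            } catch (RuntimeException e) {
                balances.put(account, before);      // revert this account only
                break;                              // stop the batch; earlier accounts keep changes
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> balances = new HashMap<>();
        balances.put("A", 0);
        balances.put("B", 0);
        balances.put("C", 0);
        runBilling(balances, List.of("A", "B", "C"), "B");
        // A keeps its billed change, B is reverted, C was never processed.
        System.out.println(balances);
    }
}
```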

 

Related configuration areas

Mandatory modules must be configured for the Utilities module to work.

Manual link: Scheduler Settings
Area: Platform
Description: Scheduler settings are used to define the server(s) which will handle batch processes sent to the scheduler.
Configuration: Mandatory

Using Archiving Utility

Foundation Application > Utilities > Archiving Utility > Perform Data Archiving Runs

Foundation Application > Utilities > Archiving Utility > Archived Data

As the exponentially increasing data of many businesses can result in the lack of storage space on database servers and the consequent slowing down of a system, archiving older data created up to a certain date is crucial. The utility compresses data that no longer has financial or marketing impact and saves it in a specific archived structure within the database, while updating the existing entities still available in the system which are associated with the archived data.

For every 'data archiving entity' a record is created and available from archived data to be downloaded or permanently deleted if it is no longer needed. The execution of the archiving utility and the deletion of archived data can only be performed by system users designated for each process through a username and password.

Non-archived data that refers to archived data, such as awarded events, spend requests, customer events and spend reward transactions, can still be accessed without restrictions but not modified. No action can be applied to such data, apart from deleting it (where allowed).

The following data can be archived:

  • Purchase Customer Event
  • Spend Request Customer Event
  • Referral Customer Event
  • Social Media Customer Event
  • Web Customer Event
  • Achievement Customer Event
  • Financial Achievement Customer Event
  • Financial Statement Customer Event
  • Provisioning Requests
  • Process Run Logs
  • Rated Billing Items

Apart from the explicitly defined entities, records of other associated entities may also be archived, even if not explicitly defined, when they are associated with an archived record. The following entities are subject to this associated archiving:

  • Awarded Events
  • Spend Reward Transactions
  • Provisioning Request Parameters
  • Process Run Log Entity
  • Applied Additive Discounts

Archiving Data Run Definitions

To archive data you must set up an Archiving Data Run Definition.

  1. Navigate to the Archive Data Run Definition and click on NEW from the Actions menu.
  2. Provide all the mandatory information along with the Criteria:
    1. Data Archiving Entity: the entity you wish to archive.
    2. Archive Data Created X Months: determines the data that should be archived based on the date on which it was created in the system. The minimum (and default) value when creating a new definition is 12 months.
  3. Once you configure the definition, SAVE it and SUBMIT it to the scheduler so that it is executed and the data is archived.
  4. A list of the executed runs (a run is created every time the definition is submitted) is available in the process runs.
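The 'Archive Data Created X Months' criterion effectively translates into a creation-date cutoff: records created on or before that date qualify for archiving. A minimal sketch of the calculation, using java.time; the class and method names are illustrative, not part of CRM.COM:

```java
import java.time.LocalDate;

// Illustrative cutoff calculation for the archiving criterion above.
public class ArchiveCutoff {

    // Minimum (and default) value per the run definition described above.
    public static final int MIN_MONTHS = 12;

    // Returns the latest creation date a record may have and still be archived.
    public static LocalDate cutoff(LocalDate today, int monthsAgo) {
        if (monthsAgo < MIN_MONTHS) {
            throw new IllegalArgumentException(
                "Archive Data Created X Months must be at least " + MIN_MONTHS);
        }
        return today.minusMonths(monthsAgo);
    }

    public static void main(String[] args) {
        // With the minimum value of 12 months, records created on or
        // before this date qualify for archiving.
        System.out.println(cutoff(LocalDate.of(2015, 6, 30), 12)); // 2014-06-30
    }
}
```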

Viewing archived data

Once the utility is executed, the archived data can be accessed from a dedicated screen, from which it can be downloaded or deleted.

 

Downloading archived data

  1. Navigate to the archived data summary page and search for the data of interest.
  2. Click on DOWNLOAD.
  3. Provide a password.
    The compressed archived file will be downloaded

Deleting archived data

  1. Navigate to the archived data summary page and search for the data to delete.
  2. Check the select box on the left of the item to be deleted.
  3. From the Actions menu click on Actions > Delete.
  4. Provide a password.
  5. Click on Delete.
    The archived data will be deleted.

 


Using the Scheduler


Foundation > Utilities > Manage Scheduled Processes

 

The scheduler is used to execute the batch tasks that are scheduled in the system.   A batch task is a series of steps involving the simultaneous handling of multiple entries, without the need for manual intervention.

Every time a run definition is 'submitted' a task is created and sent to the scheduler for processing. Depending on the 'scheduler settings' defined on the run definition, the task may be executed and removed from the scheduler (one-off), or it may be executed and the next upcoming task scheduled instantly (repetitive tasks).

The scheduler can be in either of two states:

  • 'Running': All tasks that are 'Effective' and whose CRON expression matches the current time of the scheduler will be executed.
  • 'Standby Mode': No task will be executed, even if it is scheduled.

Scheduled processes are picked up and applied only by the Application Servers which are registered with the organisation and have permission to perform each specific process. If no specific Application Server is defined for a specific process, then the default Application Server registered for the organisation is used. Refer to Scheduler Settings for more information.

 

  1. Navigate to the scheduler utility and explore scheduled tasks via the Summary page. 
  2. Click on one of the entries to access the Data Entry page of the scheduled tasks to DELETE a scheduled task or STOP a running one.
  3. Click on ACTIONS from the Actions menu and select one of the following options:
    • VIEW SCHEDULED TASKS: Displays tasks which are scheduled for the organisation.
    • VIEW RUNNING TASKS: Displays tasks which are already running for the organisation. 
    • VIEW COMPLETED TASKS: Displays completed tasks. 
    • VIEW LOG: Displays the scheduler log. The tasks executed during the current month and their results can be viewed by clicking on the day of interest.
    • START SCHEDULER: Starts the scheduler. This action can be applied only if the current application server is enabled to run scheduled processes for the specific organisation. More information is available at Scheduler Settings.
    • STOP SCHEDULER: Stops the scheduler.  

 

Task fields

For each task, the following information is available through the Summary page:

Task Number

Organisation: the database for which the task is scheduled. Particularly helpful if multiple organisations (databases) are handled by the scheduler of one application server.

Class & Method: the name of the java class and specific method used to execute the task

Parameters: The parameters that will be processed by the scheduler. Multiple parameters can be specified using the character '|' as a separator. The default parameter type is String. Parameter types can be defined using the notation type@value; for example, an Integer plus a Double parameter: java.lang.Integer@1|java.lang.Double@10.2
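The parameter notation above can be illustrated with a small parser. This is a sketch of the described syntax only, not the scheduler's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for the scheduler parameter notation:
// values separated by '|', each optionally typed as type@value
// (default type String).
public class SchedulerParams {

    public static List<Object> parse(String raw) {
        List<Object> values = new ArrayList<>();
        for (String token : raw.split("\\|")) {
            int at = token.indexOf('@');
            if (at < 0) {
                values.add(token); // untyped -> String
            } else {
                String type = token.substring(0, at);
                String value = token.substring(at + 1);
                switch (type) {
                    case "java.lang.Integer": values.add(Integer.valueOf(value)); break;
                    case "java.lang.Double":  values.add(Double.valueOf(value));  break;
                    default:                  values.add(value); // fall back to String
                }
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // The example from the field description: an Integer and a Double parameter.
        System.out.println(parse("java.lang.Integer@1|java.lang.Double@10.2")); // [1, 10.2]
    }
}
```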

Application Server: The name of the application server whose scheduler is being used

CRON Expression: The CRON expression that defines when the task should be performed.
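Assuming the common Quartz-style six-field CRON format (an assumption; consult Scheduler Settings for the exact format supported), the fields of an expression can be labeled like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Labels the fields of a CRON expression, assuming the common
// Quartz-style six-field layout:
//   seconds minutes hours day-of-month month day-of-week
public class CronFields {

    private static final String[] NAMES =
        {"seconds", "minutes", "hours", "day-of-month", "month", "day-of-week"};

    public static Map<String, String> label(String cron) {
        String[] parts = cron.trim().split("\\s+");
        if (parts.length != NAMES.length) {
            throw new IllegalArgumentException(
                "expected " + NAMES.length + " fields, got " + parts.length);
        }
        Map<String, String> fields = new LinkedHashMap<>();
        for (int i = 0; i < NAMES.length; i++) {
            fields.put(NAMES[i], parts[i]);
        }
        return fields;
    }

    public static void main(String[] args) {
        // "At 02:00:00 every day" in the assumed format.
        System.out.println(label("0 0 2 * * ?"));
    }
}
```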

State: The state of the task which can be one of the following:

  • 'Draft'
  • 'Effective'

 

Deleting a scheduled task

A task that has not yet been executed can be deleted, so that it is never executed. 

  1. Navigate to the Scheduler Utility.
  2. Click on the task to navigate to the task's Data Entry page.
  3. Click on DELETE.
  4. The scheduled task is deleted permanently and removed from the list.

Stopping a running scheduled task

A task that is already running can be stopped, once the currently executing step is completed. 

  1. Navigate to the Scheduler Utility.
  2. Click on the task to navigate to the task's Data Entry page.
  3. Click on STOP.
    1. Provide the reason for stopping the task.
  4. The following information is kept on the associated log entry of the scheduler task:
    1. The user that stopped the task.
    2. The date and time that the task stopped
    3. The reason for stopping the running task. 
  • Stopping a task will not revert the changes that already took place apart from changes on the last processed record if they were not completed.
  • The log will be available on the server and can be provided by a system administrator

 



Using Data Imports & Exports


 

 

Foundation > Utilities > Import Data

Foundation > Utilities > Export Data

Various CRM.COM processes have been implemented to manage fast imports and exports of data.

Imports use files (for which templates are available) to import data from third-party systems into CRM.COM. Refer to the following processes for details on how to use them.

Exports, on the other hand, place CRM.COM data in a structured file and export it for use by third-party systems. Refer to the following processes for details on how to use them.

Back to top 

Using Pentaho Imports & Exports


 

Foundation > Utilities > Integrate Using Pentaho

The Pentaho Utility is suitable for custom-made kettles. A kettle is a data integration tool implemented by Pentaho and embedded in CRM.COM. Both import and export kettles can be scheduled to be executed once or on a recurring basis. 

CRM.COM imports and exports are handled by batch processes.

Imports

Data can be imported in bulk at any given time through the Pentaho utility, which uses dynamic interfaces implemented as kettles.  The data is imported in a predefined file type.

  1. Navigate to Integrate Using Pentaho

  2. Choose the module of your interest.
  3. Locate and click on the import kettles to run.
    1. Click on the Template link to download the template of the import file.
    2. Create and save the file.
    3. Click on the file.
      1. The import details will open in a new tab
      2. Click on Choose File to browse and select the created file.
      3. Once the file is uploaded click on Next.
        1. Provide the required parameters (if any). Each import kettle requires different parameters.
        2. Click on Start Import to execute the import.
        3. Click to Close the page and go back to the Main page.
  4. Once the kettle is executed, results are made available on the Main page.
    1. Click on View Results next to the executed import to get a list of instances on which the import was executed together with the Run's successful records and those with errors.

 

Exports

Data can be exported in bulk at any given time through the Pentaho Utility, which uses dynamic interfaces implemented as Kettles. The data is exported in a file type that is predefined and automatically generated by the System. 

  1. Navigate to Integrate Using Pentaho

  2. Choose the module of your interest.
  3. Locate and click on the Export Kettle to run.
    1. The export details will open in a new tab.
    2. Provide the required parameters (if any).  Each export Kettle requires different parameters. 
      1. Click on Start to execute the export directly.
      2. Click to Close the page and go back to the Main page.
  4. Once the kettle is executed, results are made available on the Main page.
    1. Click on View Results next to the executed Export for a list of instances on which the Export was executed together with the Run's successful records and those with errors.

Only successful records and error records that were defined as 'to be logged' when the kettle was created will be shown. If nothing is defined as 'to be logged', then no information will be available under the Successful or Error drill-downs.

 

Scheduling import and export kettles

Kettle runs for imports or exports can be configured to start automatically. The information is specified and the process is triggered from the Pentaho Imports & Exports screen.

Once a schedule is created and submitted, the kettle will be executed once or on a recurring basis, depending on the Scheduling Settings.

  • Navigate to the Integrate Using Pentaho screen and explore existing Pentaho Imports and Exports.
  • Select the Pentaho Import or Export of your interest and go to the Data Entry page to enter the required information.
  • Click on VIEW SCHEDULED to view existing Scheduled Kettle Runs or on SCHEDULE to schedule a new one.


 

Utilities Business Examples

Archiving of purchases

Archiving old data

Scenario 1

Company ZX would like to save space in its database through a clean-up process executed on an annual basis. Data older than 6 months related to purchases logged in the system should be compressed and removed from the database.


 

Solution

Configuration

An Archiving Definition Run must be set as follows:

Data Archiving Entity: Purchase Customer Events
Archive Data Created X Months Ago: 6

User Process

Authorised Personnel, i.e. Super Users, should access the Run Definition and submit the process to the Scheduler, every time that the cleaning process must be executed. A password will be required to ensure that the archiving is submitted only by authorised users.

Notes

 

