Friday, November 28, 2008

v8.9 - Time and Labor – Absence Management Integration Technical Details

This document discusses the technical details of integrating Absence Management with Time and Labor when North American Payroll is not used.
[Update, 3 October 2012: the title now notes that this post describes the integration in v8.9. The integration between Time and Labor and Absence Management has been completely revamped in v9.0.]

Map a TRC to an Absence Management Take Code as shown in the diagram below:



The mapping between a TRC and an AM Take code is held in the table TL_ERNCD_TBL.

All the Absence data that will be sent to Time and Labor is stored in the record GP_ABS_EVT_WRK.
This table is populated through PeopleCode (GP_ABS_EVENT.EMPL_RCD.FieldFormula) when a manager approves absences. Note that only data in this table is picked up by the integration program, so ensure that all the data expected to be sent to Time and Labor is present in it before running the integration.
The table is purged after a successful run of the integration program, so it is recommended to take a backup of it before running the integration process. It will contain all added, deleted and changed absence data.
Also note that this table is populated only if the employee is active in Time and Labor and has a pay system flag of 'GP' in PS_JOB.
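Since GP_ABS_EVT_WRK is purged after a successful integration run, a pre-run backup can be as simple as the SQL below (the backup table name PS_GP_ABS_EVT_BKP is my own invention; adjust to your site's naming standards):

```sql
-- Snapshot the staging table before running Retrieve Absences;
-- the integration truncates it on a successful run.
-- PS_GP_ABS_EVT_BKP is a hypothetical backup table name.
CREATE TABLE PS_GP_ABS_EVT_BKP AS
SELECT * FROM PS_GP_ABS_EVT_WRK;
```

A quick SELECT COUNT(*) against both tables before kicking off the process confirms the snapshot is complete.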

Run the ‘Retrieve Absences’ process from Time and Labor, as shown below:



This process triggers the TLAGGUPDATE AE. The main purpose of this AE is to populate the TL_ST_ELPTIME work table from the data present in GP_ABS_EVT_WRK. It takes all absences present in GP_ABS_EVT_WRK, checks the Event Action (Add, Change or Delete) and populates TL_ST_ELPTIME. The AE also converts the reported days of absence into the corresponding number of hours for use by Time and Labor; this is done by checking the absence days against the employee's schedule for each day.
The TLAGGUPDATE AE calls the TL_AGG_SECTN AE, where the insertion into the TL_ST_ELPTIME table takes place.
GP_ABS_SUM_WRK and GP_ABS_DTL_WRK are two other important work tables used by the TLAGGUPDATE AE. All the tables mentioned above are truncated after the process runs, so the presence of data in them after a run is an indication that the insertion into TL_RPTD_TIME has not happened.

The TLAGGUPDATE AE finally calls the TL_ST_LIB AE, which transfers data from TL_ST_ELPTIME to TL_RPTD_TIME. The latter AE is triggered using the CreateProcessRequest command, so it runs as a separate process from TLAGGUPDATE.

Time that originated from Absence Management can be identified by looking for rows with an RT_SOURCE value of 'AM' in the TL_RPTD_TIME table.
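A quick way to eyeball these rows, assuming the standard record-to-table naming (record TL_RPTD_TIME maps to table PS_TL_RPTD_TIME), is a query along these lines; the column list is illustrative, so verify the field names against your record definition:

```sql
-- Reported time rows that originated in Absence Management
SELECT EMPLID, DUR, TRC, TL_QUANTITY
FROM PS_TL_RPTD_TIME
WHERE RT_SOURCE = 'AM'
ORDER BY EMPLID, DUR;
```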

The screenshot below depicts how the absence data appears on the Timesheet:



Side Notes:
> Check for data in the GP_ABS_EVT_WRK table to estimate the number of rows that will be sent to Time and Labor.
> Ensure that each employee is assigned to a proper schedule and that the schedules are set up correctly.
> Data is loaded into the TL_ST_ELPTIME table, from where it is loaded into TL_RPTD_TIME by the TL_ST_LIB AE.
> If the process has not run properly, the GP_ABS_EVT_WRK, GP_ABS_SUM_WRK, GP_ABS_DTL_WRK and TL_ST_ELPTIME tables will still have data.
> Check for the loaded rows in TL_RPTD_TIME by looking for RT_SOURCE = 'AM'.

Saturday, November 1, 2008

Time and Labor - Absence Management Integration in HRMS 8.9

It's been a long while since I've been able to post here, thanks to the internet security at work!! I thought of writing down my experiences with integrating T&L with Absence Management in 8.9. This is one really exciting area in Time and Labor, with more and more customers going in for Absence Management and T&L rollouts in the same implementation. From a business perspective, one cannot really see T&L and AM as different entities; though both have their own unique functions, they (and more so T&L) need to be tightly integrated and share data for meaningful business processing. Let's look at some scenarios where we need the integration:

> Payroll needs both reported time as well as absence data. Depending on the Pay System, T&L and AM data has to be collated for this purpose.

> Users would like to see/report time as well as absences on the same screen (this is one of the major enhancements to T&L in 9.0, with the feature to report/view absence data on the Timesheet. I think it's a wonderful and long-awaited feature).

> A number of T&L rules are dependent on Absence data (for example, don't apply certain rules if the employee was absent on a particular day).

Now, let's explore the integration of these two modules (the discussion is specific to 8.9; I know it's a bit anachronistic, but I hope it will be useful to people still on this version!). The integration path varies depending upon the payroll system used. There are two separate processes: one for use if the pay system is GP, and the other if the pay system is NA Payroll (note that there is no delivered integration method if the pay system is Payroll Interface). Both processes take absence data and put it into the reported time table. So what is the difference between them?
The NA Payroll process can be run only after an Absence Run has been finalised, and it takes data from the GP Positive Input table (GP_PI_GEN_DATA). This ensures that correctly calculated data comes over to T&L. But for many users this is a major disadvantage of the process, for you cannot view absence data in Time and Labor before an Absence Calendar is finalised!
But the case is different with the GP integration process. This takes data from a work table (GP_ABS_EVT_WRK) which is populated when a manager approves/deletes/changes absence requests. This means that absence data is available to T&L even before the Absence Run is started. But this comes with a major disadvantage: the absence data that comes over to T&L may not be the correct, finalised absence data. In one sense it's alright if GP is used as the payroll product; things get murky if one is using a third-party payroll system. In that case, the absence data in the payable time table of T&L might be incorrect, as the data came over before the Absence Run took place.
This is a major drawback of the current integration, and every 8.9 implementation that looks to integrate AM and T&L needs to keep this in mind.
I would be interested to know the ways people have worked around integrating T&L and AM with a third party payroll.

Post Note: How to identify the data that has come from Absence Management in T&L?
Query TL_RPTD_TIME for RT_SOURCE = 'AM'.


Wednesday, August 20, 2008

On Oracle Optimisers

Found this elementary document on Oracle query execution and the cost-based (CB) and rule-based (RB) optimisers on Scribd; embedding it here for posterity.

Read this document on Scribd: Oracle understanding optimizers

Thursday, July 31, 2008

Overview of Time and Labor


First in a series of presentations on Peoplesoft. For more information visit my blog at www.peoplesofthrms.blogspot.com


SlideShare Link




Tuesday, July 29, 2008

Peoplesoft popularity by numbers!

Every other day I indulge in some kind of debate regarding the future of PeopleSoft and the imminent assault of Fusion on the PS breed!! I must say that these are murky times for us PeopleSoft consultants, even though Oracle has gone overboard to support and develop the product. Considering this, it was pleasant to read this post on the Grey Sparling blog about the popularity of PeopleSoft. Some interesting numbers from the article:

  • 9 of the top 10 commercial banks are PeopleSoft customers
  • 59% of the top 100 of the Fortune 500 companies own PeopleSoft
  • Retail: the 5 biggest retailers use PeopleSoft
  • 6 of the top 10 communications companies use PeopleSoft
  • 60% of the top 15 insurance companies use PeopleSoft
  • 70% of the top 10 health care organisations use PeopleSoft
  • 19 US states use PeopleSoft
  • 50 of the largest counties and cities use PeopleSoft
  • 7 of the top 10 research universities use PeopleSoft
  • 8 of the top 10 printing and publishing companies use PeopleSoft
Numbers and statistics can be misleading if not read in the correct context, but the features of the product, the way PeopleSoft has evolved under Oracle by incorporating so many of the Fusion middleware features, and Oracle's very vocal commitment to PeopleSoft customers tell me that we should be in business for quite some time!

Auto creation of user profiles

There are multiple scenarios where it's required to create new user profiles in bulk. A recent requirement I had was to create user profiles for all employees in the system during a pilot upgrade exercise. The best approach to this problem is a CI built on the USER_SAVEAS component (the Copy User Profile component).
I was mulling over the various options to auto-generate passwords for the user profiles and ran into the delivered function in the DERIVED_HINT.EMAILPSWD.FieldFormula event.

Friday, July 11, 2008

SQR Migration Strategies

In continuation of my earlier post on SQRs and XML Publisher, this discussion aims to arrive at a decision matrix for migrating from SQRs to other PeopleSoft/Fusion tools. A number of organisations that run on 8.0, 8.3 and other older versions of PeopleSoft have widely utilised SQRs for non-reporting scenarios like inbound interface processing, mass updates, utility processes etc. With Oracle's very vocal recommendation that customers refrain from using SQRs in newer releases (though it's unlikely that Oracle will stop supporting SQRs on PeopleSoft any time soon), it is extremely prudent to have a strategy in place to move all SQR programs to an alternate tool. The decision matrix below was prepared by taking into consideration the various types of SQR programs as well as the capabilities and weaknesses of the other tools. This is a very high-level illustration, and your expert comments and thoughts are invited to further refine this tool.

Type of SQR -> Migration Strategy

> Inbound Interface – Data source: File -> File Layout + AE + CI if insertion is done to a component
> Inbound Interface – Data source: Peoplesoft -> App Messaging/IB
> Mass Data Changes (mass hires, mass update of a table etc.) -> AE + CI if a component is updated. Analyse whether the PeopleSoft Mass Change functionality can be utilised.
> Outbound Interface (flat files, datasets) -> AE + XML Publisher for report generation; Web Services if the 3rd party supports them
> SQCs -> Custom SQCs can be replaced by Application Classes invoked from PeopleCode.
> Utility programs (clean-up, file manipulation, audits, status checks) -> Audits, file operations and OS-related processes are better done by SQRs; retain them. Utilities that affect the database can be replaced by AE.
> Letters and Forms (online forms as well as mailed letters) -> XML Publisher + PeopleCode notification classes for mails
> Reports – simple, single data source (reports with one BEGIN-SELECT) -> XML Publisher
> Reports – simple, multiple data sources (more than one BEGIN-SELECT and simple calculations) -> Migration might require an AE to execute the calculations plus XML/XSL modifications. Move to XML Publisher if the logic is not complicated; otherwise retain SQRs.
> Reports – on-breaks and multiple reports -> XML Publisher
> Reports – complex, with multiple BEGIN-SELECTs and complex conditional logic -> Do not migrate; the effort is not worth it.
> Reports – status of batch processes (error reports and status reports of processing done by an AE or other batch programs) -> Move to XML Publisher.

Thursday, July 10, 2008

Time and Labor Security demystified

Security in Time and Labor is an area that most new implementors find unclear. Almost every team I have consulted for has had the question 'how do I ensure that the employees reporting to a manager show up in the manager self service pages of Time and Labor?'
This post aims to be a guide towards designing the T&L security setup.

T&L Security is primarily concerned with ensuring the correct access of reportees to managers as well as controlling the dates which are open to any employee to report time. We are more concerned with the first case here.

Security in Time and Labor is a two-step process: 1. grouping the employees reporting to a manager/HR administrator together, and 2. granting access to the correct groups to the respective managers. To group employees together, T&L has a powerful feature called Dynamic Groups (there is another category called Static Groups, which are rarely used in practice and thus do not deserve mention here), similar to the Group Build feature in core HRMS. A Dynamic Group constructs a group of employees based on an SQL selection criterion (for example, select emplid from ps_job where reports_to = ) and is thus truly dynamic. Once created, the Dynamic Group has to be refreshed on a daily basis to ensure that it includes all eligible employees.
The illustration below shows how a dynamic group can be created by defining the appropriate SQL eligibility criteria.
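To make the criteria concrete, a manager-based dynamic group might use selection SQL along these lines (the position number is made up, and the effective-date logic is only a sketch; the real criteria are entered through the Dynamic Group setup pages):

```sql
-- Illustrative dynamic group criteria: current JOB rows
-- reporting to a hypothetical position 00001234
SELECT J.EMPLID
FROM PS_JOB J
WHERE J.REPORTS_TO = '00001234'
  AND J.EFFDT = (SELECT MAX(J1.EFFDT)
                 FROM PS_JOB J1
                 WHERE J1.EMPLID = J.EMPLID
                   AND J1.EMPL_RCD = J.EMPL_RCD
                   AND J1.EFFDT <= SYSDATE);
```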

After the group has been created, the next step is to assign it to the managers/administrators who need access to it. Access in Time and Labor is controlled by an operator's ROWSECCLASS, so the groups have to be linked to the eligible ROWSECCLASSes to give managers access to their reportees' data. The illustration below shows how a group can be linked to a ROWSECCLASS. Keep in mind that one group can be linked to multiple ROWSECCLASSes and one ROWSECCLASS can be linked to multiple groups.



Now coming to the important question - how does a manager get access to employees under him? Let's just look at the below process flow for that:

1. Manager logs in with his/her Operatorid.
2. The ROWSECCLASS of this Operator is fetched from PSOPRDEFN.
3. The TL_GRP_SECURITY table is queried to get all the groups that this ROWSECCLASS has access to.
4. The TL_GROUP_DTL table is queried to get all the EMPLIDs part of each group fetched in step 3.
5. The employees fetched are displayed on the MSS pages.
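Steps 2 to 4 above can be collapsed into a single diagnostic query; the operator ID is hypothetical, and the column names are from memory, so verify them against the actual record definitions:

```sql
-- Which employees can this operator's row security class see?
SELECT DISTINCT D.EMPLID
FROM PSOPRDEFN O,
     PS_TL_GRP_SECURITY S,
     PS_TL_GROUP_DTL D
WHERE O.OPRID = 'MGR001'
  AND S.ROWSECCLASS = O.ROWSECCLASS
  AND D.GROUP_ID = S.GROUP_ID;
```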

This is an extremely simplified description of what actually goes on behind the scenes when the system retrieves the employees on a T&L MSS page. To ensure that a manager has access to his reportees, verify the following:
a. All the reportees are part of at least one Dynamic Group. Query TL_GROUP_DTL to get the groups of an employee.
b. The manager/administrator has access to those groups. Query TL_GRP_SECURITY to get all the groups that a particular ROWSECCLASS has access to.
c. Schedule the Dynamic Group refresh process on a nightly basis to ensure that the dynamic groups are up to date.

Caveat: the above process requires the creation of a large number of ROWSECCLASSes and Dynamic Groups to maintain exclusivity of employee access to managers. Organisations that follow this approach will need to build a custom program that creates a new ROWSECCLASS when a new manager is hired, programmatically creates a dynamic group and assigns security for that group to the new ROWSECCLASS. This complex process can be circumvented by customising the code behind the 'Get Employees' button present in all T&L MSS employee search pages. The approach would be to create a single Dynamic Group, enrol all employees in it and give all managers/administrators access to that single group, then customise the PeopleCode behind the Get Employees button to fetch only those reporting to the manager from the entire set fetched by the vanilla logic explained above. The customisation has to be done in the FUNCLIB_TL_CP.MGR_SS_FUNC.FieldFormula event.





T&L security design is an area where we see lot of questions being raised, especially considering the integration of AWE in v9.1. If you have specific questions on this, reach out to me @ jiju@hroiconsulting.com

Peoplesoft Time and Labor Setup Guide: Summary

Having completed over seven end-to-end rollouts in PeopleSoft Time and Labor, my team recently decided to design a quick implementation package for PeopleSoft Time and Labor implementations. This package contains ready-made data loading programs for every T&L setup component, a complete T&L design strategy guide covering best practices in designing the various data elements, a ready-made PeopleSoft HRMS to Time and Labor integration program, an exhaustive client questionnaire gathered from our experience that helps the team capture even the smallest client requirement at the earliest stage, a data collection form for all T&L components, and a reported time mass load program. These tools will drastically reduce the implementation time for any T&L project and serve as a ready reckoner for any organisation going in for a T&L implementation.

Pasted below is a summary of the setup sequence as well as the setup methodology for all T&L components, an excerpt from the setup guide.

Setup Component -> Data Loading Method

> T&L Installation -> Manual
> Time Zone Offset -> Manual
> Time Periods -> Manual
> Pay System -> Manual
> TRC Category -> Manual
> TRC -> AE
> TRC Program -> Manual/AE depending on volume of data
> Compensatory Time Off Plan -> AE
> Override Reason Code -> Manual
> Time Reporting Template -> Manual
> Shifts -> AE
> Schedule Group -> Manual
> Schedule Definition -> AE
> Holiday Schedule -> AE
> Value List -> Manual
> Template Built Rule -> Manual
> Rule Program -> Manual
> Task Template -> Manual
> Task Profile -> Manual
> Task Group -> Manual
> Time Administration Options -> Manual
> TL Permission List Security -> Manual
> Dynamic Group Creation -> Manual
> Employee Enrolment -> AE
> Compensation Plan Enrolment -> AE
> Schedule Assignment -> AE

Wednesday, July 9, 2008

SQR and the XML Publisher Juggernaut

I have been surprised at the vigour with which Oracle has been promoting its Fusion reporting tool, XML Publisher. Any documentation/presentation/webcast on Fusion tools talks at great length about the features and advantages of XMLP, how it has been integrated into PeopleTools and, more importantly, how it is going to be the only reporting tool for Fusion moving forward. One of the earliest documents I read on strategies for PeopleSoft customers to be Fusion-ready listed abandoning SQRs and embracing XMLP as one of the main Fusion-oriented decisions for a PeopleSoft customer. Toeing this line, almost all new PeopleSoft implementations are religiously rejecting SQRs and embracing XMLP as the standard reporting tool. Is this the end of the road for SQRs? Probably yes!

As much as I have seen, XMLP is a wonderful reporting tool. Creating a report using XMLP is many times easier, faster and more elegant than with SQRs. Its powerful feature of designing the layout separately will make it a hit with business end users, who can now design their own desired layout in a Microsoft Word document without any help from IT. (I had a good laugh when I realised that all I had to do to add a header with the company logo in XMLP was to add a header in MS Word! Huh? That's it?! Goodbye to all the SQR coding! In hindsight, another major advantage is spacing the report: adding a new column to an SQR report requires programmatically adjusting the column numbers of the various variables being printed to align to the new layout, but with XMLP it just requires dragging the fields to their appropriate places!) The PeopleSoft team at Oracle have done well to integrate XMLP into the PeopleTools environment. XMLP is the newest addition to the Reporting Tools in PIA, and there is a flurry of new PeopleCode classes (refer to the Application Packages starting with PSXP_) that facilitate running XMLP reports from online pages, AEs etc.
But is XMLP a complete replacement for SQRs? XMLP taken alone is just a reporting tool, while SQR is a programming language. The flexibility and power of SQRs to handle extremely complex reporting requirements (working with multiple data sets, calculations, conditional logic, usage of data structures etc.) cannot be matched by XMLP. For example, can I write an XMLP report that fetches all the employees who returned from sick and maternity leave today (for this I will need to fetch the action reason of the LOA row previous to the latest RFL row, assuming that sick and maternity leave are captured using the LOA action) and executes different processing logic depending on the leave reason? No. XMLP is not designed to work with multiple data sets and cannot handle such a scenario where conditional logic has to be executed on multiple data sets (first on the current effective-dated row to fetch all RFL rows, and second to find out which of those RFL employees had gone on LOA due to sickness or maternity). This is the forte of SQRs: being a procedural language, it can handle any complex logic. So the more important question is: is it wise to completely abandon the use of SQRs as a matter of strategy? Both XMLP and SQRs have their own strengths, and why can't we continue to use SQRs for requirements that are easier to code and better handled by that tool? For me, the most prudent approach is to have a mix of both tools and to judiciously choose the right tool for the right requirement.

Custom names for SQR multiple report outputs

I had a requirement to produce two PDF files from a single SQR with specific file names - say RFL011008_USA.pdf and RFL011008_CAN.pdf. While there is significant documentation on generating multiple reports in SQRs, I thought that it would be instructive to share the steps to produce multiple reports with unique output file names.

The file names of the various output files are controlled by the output-file-mode parameter in the sqr.ini file, and they usually follow the convention sqrname_XX.ext (like sqrname_01.ext, sqrname_02.ext). To override this default file naming convention, use the NEW-REPORT command.
A sample program to produce multiple reports follows:

BEGIN-SETUP

DECLARE-REPORT USA
END-DECLARE

DECLARE-REPORT CANADA
END-DECLARE

END-SETUP


BEGIN-PROGRAM

LET $USA_FILE = '/OPT/PSOFT/HRMS/SQR/RFL011008_USA.PDF'
LET $CAN_FILE = '/OPT/PSOFT/HRMS/SQR/RFL011008_CAN.PDF'

USE-REPORT USA
NEW-REPORT $USA_FILE
PRINT 'HRMS REPORT FOR USA' (1,1)

USE-REPORT CANADA
NEW-REPORT $CAN_FILE
PRINT 'HRMS REPORT FOR CANADA' (1,1)

END-PROGRAM

**Append the flag -MR nn to the command line when running a multiple-report SQR. Here nn represents the number of reports to be produced. (I was left wondering why only a single report was being produced when I had coded for two reports in the SQR; adding -MR 2 to the command line solved the issue.)

SQR Optimisation Techniques

One of the biggest challenges I've faced while writing HRMS and Payroll reports has been the performance tuning of the SQRs. A typical payroll report that needs to display the EMPLID, NAME and earnings for a certain pay period involves a join of three to four massive transaction tables, and the execution time can regularly exceed 15-20 minutes. How do we bring it down to an acceptable run time of below 5 minutes? Though it's a tall order to get such a drastic performance improvement, I have been able to get significant results by following the steps below:

1. Check your SQL queries: most performance issues can be attributed to poorly written SQL. Critically review each SQL statement in your SQR and ensure its correctness.

2. Usage of temporary tables: this is number two on my list. A strategy I adopt when multiple tables have to be joined and the report involves complex processing is to pull out the master data and insert it into a temporary table created in the BEGIN-SETUP section of the SQR. Further processing can then be done against the data in the temporary table; this reduces trips to the transaction tables, thereby giving significant performance gains.
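As a sketch of the staging idea (the scratch table name, pay end date and column list are illustrative, and NA Payroll key fields such as PAGE# and LINE# should be verified against your own record definitions):

```sql
-- Stage the heavy join once; later report procedures read the
-- small scratch table instead of re-joining transaction tables.
INSERT INTO PS_RPT_TMP_EARNS (EMPLID, PAY_END_DT, ERNCD, OTH_HRS)
SELECT P.EMPLID, P.PAY_END_DT, E.ERNCD, E.OTH_HRS
FROM PS_PAY_CHECK P, PS_PAY_OTH_EARNS E
WHERE E.COMPANY    = P.COMPANY
  AND E.PAYGROUP   = P.PAYGROUP
  AND E.PAY_END_DT = P.PAY_END_DT
  AND E.OFF_CYCLE  = P.OFF_CYCLE
  AND E.PAGE#      = P.PAGE#
  AND E.LINE#      = P.LINE#
  AND P.PAY_END_DT = TO_DATE('2008-07-31', 'YYYY-MM-DD');
```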

3. Usage of arrays: arrays are excellent instruments for improving the maintainability of the program. I would always use arrays to hold totals and other values that are re-initialised multiple times in the report (note that the values of an array can be reset using the CLEAR-ARRAY command). If plain variables were used instead, the number of variables to maintain, the initialisation routine etc. would add to the complexity of the program.

4. Dynamic SQL: the usage of dynamic SQL greatly reduces the length of a program and helps improve its performance. One requirement I had was to generate two reports with the same output fields and filter criteria, but one sorted by EMPLID and the other by NAME. In this case, the only difference was the ORDER BY clause, so instead of writing two different BEGIN-SELECTs, the requirement was achieved by maintaining a single BEGIN-SELECT and passing the ORDER BY clause as dynamic SQL.

5. Break down your SQL: this is something you will find in PeopleBooks too. Make sure that you query the database only when necessary. Say you have to produce a report listing EMPLID, NAME, BIRTHDATE, the manager's name and the jobcode description. This involves a join of four tables (including PS_NAMES aliased twice) and, considering the key structure of these tables, will take considerable execution time. An alternate strategy is to have a separate procedure to get the jobcode description, where only JOBCODE_TBL is queried, instead of making it part of the join.

6. Minimal code and re-usable logic: the beauty of SQR is that it is the most flexible tool in the PeopleSoft technical environment (especially considering the rigidity and ready-made nature of an AE, CI or PeopleCode) and offers the developer multiple paths to one solution. Develop logic that re-uses your procedures and minimises database trips.

What about Load Lookups? The documentation says that Load Lookups give significant performance enhancements. In practice I have not been able to achieve this: the lookup gets loaded at the start of the report, and the time taken to load it offsets any advantage the concept delivers. Usage of temporary tables is a much better alternative; in effect the two concepts are inherently the same - reduce database trips and the querying of large transaction tables.