Friday, November 28, 2008

v8.9 - Time and Labor – Absence Management Integration Technical Details

This document discusses the technical details of the integration of Absence Management with Time and Labor without North American Payroll.
/*3rd October, 2012 - Updated the title to note that this post represents the integration in v8.9. The integration between Time and Labor and Absence Management has been completely revamped in v9.0.*/

Map a TRC to an Absence Management Take Code as shown in the diagram below:



The table that holds the mapping details of a TRC and an AM Take code is TL_ERNCD_TBL.
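
A quick way to review the existing mappings is to query the record directly - a minimal sketch, assuming the standard PS_ table prefix:

select * from ps_tl_erncd_tbl; -- PS_ table prefix assumed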

All the Absence data that will be sent to Time and Labor is stored in the record – GP_ABS_EVT_WRK.
This table is populated through PeopleCode (GP_ABS_EVENT.EMPL_RCD.FieldFormula) when a manager approves absences. Note that only data in this table is picked up by the integration program, so ensure that all the data expected to be sent to Time and Labor is present here before running the integration process.
Note that this table is purged after a successful run of the integration program, so it is recommended to take a backup of the table before running the process. The table contains all Added, Deleted and Changed absence data.
Note that this table is populated only if the employee is active in Time and Labor and has a pay system flag of ‘GP’ in PS_JOB.
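
A minimal backup sketch, assuming the standard PS_ table prefix (the backup table name is arbitrary):

-- one-off copy taken just before the integration run; PS_ prefix assumed
create table gp_abs_evt_bkp as
select * from ps_gp_abs_evt_wrk;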

Run the ‘Retrieve Absences’ process from Time and Labor, as shown below:



This process triggers the TLAGGUPDATE AE. The main purpose of this AE is to populate the TL_ST_ELPTIME work table from the data present in the GP_ABS_EVT_WRK table. The AE takes all absences present in GP_ABS_EVT_WRK, checks the Event Action (whether Add, Change or Delete) and populates TL_ST_ELPTIME accordingly. The AE also converts the reported days of absence into the corresponding number of hours for use by Time and Labor, by checking each absence day against the employee's schedule for that day.
The TLAGGUPDATE AE calls the TL_AGG_SECTN AE where the insertion into TL_ST_ELPTIME table takes place.
GP_ABS_SUM_WRK and GP_ABS_DTL_WRK are two other important work tables used by the TLAGGUPDATE AE. Note that all the tables mentioned above are truncated after the process runs; the presence of data in these tables after a run is an indication that the insertion into TL_RPTD_TIME has not happened.
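
A quick post-run sanity check - a sketch only, assuming these work records are physical tables with the standard PS_ prefix:

-- non-zero counts after the run indicate the insert into TL_RPTD_TIME failed
-- (record names as given above; PS_ prefix assumed)
select count(*) from ps_tl_st_elptime;
select count(*) from ps_gp_abs_sum_wrk;
select count(*) from ps_gp_abs_dtl_wrk;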

The TLAGGUPDATE AE finally calls the TL_ST_LIB AE which transfers data from TL_ST_ELPTIME to TL_RPTD_TIME. The latter AE is triggered using the CreateProcessRequest command and thus it will run as a separate process from the TLAGGUPDATE process.

The time coming from Absence can be identified by looking for rows with an RT_SOURCE value of 'AM' in the TL_RPTD_TIME table.
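
For example (a minimal sketch - PS_ prefix assumed; add EMPLID/date filters as required):

select * from ps_tl_rptd_time -- record name per the post; PS_ prefix assumed
where rt_source = 'AM';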

The screenshot below depicts how the absence data appears on the Timesheet:



Side Notes:
> Check for data in the GP_ABS_EVT_WRK table to estimate the rows that will be sent to Time and Labor.
> Ensure that each employee is assigned to a proper schedule and that the schedules are properly set up.
> Data is loaded into TL_ST_ELPTIME table from where it is loaded into TL_RPTD_TIME by the TL_ST_LIB AE.
> If the process has not run properly, the GP_ABS_EVT_WRK, GP_ABS_SUM_WRK, GP_ABS_DTL_WRK and TL_ST_ELPTIME tables will have data.
> Check for the loaded rows in TL_RPTD_TIME by looking for RT_SOURCE = ‘AM’.

Saturday, November 1, 2008

Time and Labor - Absence Management Integration in HRMS 8.9

It's been really long since I've been able to post here! Thanks to the internet security at work!! I thought of writing down my experiences with integrating T&L with Absence Management in 8.9. This is one really exciting area in Time and Labor, with more and more customers going in for Absence Management and T&L rollouts in the same implementation. From a business perspective, one cannot really see T&L and AM as different entities; though both have their own unique functions, they (and more so T&L) need to be tightly integrated and share data for meaningful business processing. Let's look at some scenarios where we need the integration:

> Payroll needs both reported time as well as absence data. Depending on the Pay System, T&L and AM data has to be collated for this purpose.

> Users would like to see/report time as well as absence on the same screen (this is one of the major enhancements to T&L in 9.0, with the feature to report/view absence data on the Timesheet. I think it's a wonderful feature and a long awaited one).

> A number of T&L rules are dependent on Absence data (for example, don't apply certain rules if the employee was absent on a particular day).

Now, let's explore the integration of these two modules (the discussion is specific to 8.9 - I know it's a bit anachronistic, but I hope it will be useful to people still on this version!). The integration path varies depending upon the payroll system used. There are two separate processes - one for use if the Pay System is GP and the other if the Pay System is NA Payroll (note that there is no delivered integration method if the Pay System is Payroll Interface). What both these processes do is take absence data and put it into the reported time table. So what is the difference between these two processes?
The NA Payroll process can be run only after an Absence Run has been finalised, and it takes data from the GP Positive Input table (GP_PI_GEN_DATA). This ensures that correctly calculated data comes over to T&L. But for a lot of users this is a major disadvantage of the process, for you cannot view absence data in Time and Labor before an Absence Calendar is finalised!
But the case is different with the GP integration process. This takes data from a work table (GP_ABS_EVT_WRK) which is populated after a manager approves/deletes/changes absence requests. This means that absence data is available to T&L even before the Absence Run is started. But this comes with a major disadvantage: the absence data that comes to T&L may not be the correct finalised absence data. In one sense it's alright if GP is used as the payroll product. Things get murky if one is using a third party payroll system. In that case, the absence data in the payable time table of T&L might be incorrect, as the data has come over before the Absence Run has taken place.
This is a major drawback of the current integration, and every 8.9 implementation that looks to integrate AM and T&L needs to keep this in mind.
I would be interested to know the ways people have worked around integrating T&L and AM with a third party payroll.

Post Note: How to identify the data that has come from Absence Management in T&L?
Query TL_RPTD_TIME for RT_SOURCE = 'AM'.


Wednesday, August 20, 2008

On Oracle Optimisers

Found this elementary document on Oracle Query Execution and the Cost-Based and Rule-Based Optimisers on Scribd - embedding it here for posterity.

Read this document on Scribd: Oracle understanding optimizers

Thursday, July 31, 2008

Overview of Time and Labor


Time And Labor Overview






First in a series of presentations on Peoplesoft. For more information visit my blog at www.peoplesofthrms.blogspot.com


SlideShare Link




Tuesday, July 29, 2008

Peoplesoft popularity by numbers!

Every other day I indulge in some kind of debate regarding the future of Peoplesoft and the imminent assault of Fusion on the PS breed!! I must say that these are murky times for us Peoplesoft consultants, though Oracle has gone overboard to support and develop the product. Considering this, it was pleasant to read this post on the Grey Sparling blog about the popularity of Peoplesoft. Some interesting numbers from the article:

  • 9 of the top 10 commercial banks are PeopleSoft customers
  • 59% of the top 100 of the Fortune 500 companies own PeopleSoft
  • Retail - the 5 biggest use PeopleSoft
  • 6 of the top 10 communications companies use PeopleSoft
  • 60% of the top 15 insurance companies use PeopleSoft
  • 70% of the top 10 health care organizations use PeopleSoft
  • 19 US states use PeopleSoft
  • 50 of the largest counties and cities use PeopleSoft
  • 7 of the top 10 research universities use PeopleSoft
  • 8 of the top 10 printing and publishing companies use PeopleSoft
Numbers and statistics can be misleading if not read in the correct context, but the features of the product, the way Peoplesoft has evolved under Oracle by incorporating so many of the Fusion middleware features, and Oracle's very vocal commitment to Peoplesoft customers tell me that we should be in business for quite some time!

Auto creation of user profiles

There are multiple scenarios where it's required to create new user profiles in bulk. A recent requirement I had was to create user profiles for all employees in the system during a pilot upgrade exercise. The best approach to this problem is a CI built on the USER_SAVEAS component (the Copy User Profile component).
I was mulling over the various options to auto-generate passwords for the user profiles and ran into the delivered function in the DERIVED_HINT.EMAILPSWD.FieldFormula event.

Friday, July 11, 2008

SQR Migration Strategies

In continuation of my earlier post on SQRs and XML Publisher, this discussion aims to arrive at a decision matrix for migrating from SQRs to other Peoplesoft/Fusion tools. A number of organisations that run on 8.0, 8.3 and other older versions of Peoplesoft have widely utilised SQRs for non-reporting scenarios like inbound interface processing, mass updates, utility processes etc. With Oracle's very vocal recommendation that customers refrain from using SQRs in newer releases (though it's unlikely that Oracle will stop supporting SQRs on Peoplesoft any time soon), it is extremely prudent to have in place a strategy to move all SQR programs to an alternate tool. The decision matrix below has been prepared by taking into consideration the various types of SQR programs as well as the capabilities/weaknesses of the other tools. This is a very high level illustration and your expert comments and thoughts are invited to further refine this tool.

Type of SQR – Migration Strategy

  • Inbound Interface (data source: file) – File Layout + AE + CI if insertion is done to a component
  • Inbound Interface (data source: Peoplesoft) – App Messaging/IB
  • Mass data changes (mass hires, mass update of a table etc.) – AE + CI if a component is updated. Analyse whether the Peoplesoft Mass Change functionality can be utilised.
  • Outbound Interface (flat files, dataset) – AE + XML Publisher for report generation / Web Services if the 3rd party supports them
  • SQCs – Custom SQCs can be replaced by Application Classes invoked from Peoplecode.
  • Utility programs (clean up, file manipulation, audits, status checks) – Audits, file operations and OS related processes are better done by SQRs; retain them. Utilities that affect the database can be replaced by AE.
  • Letters and forms (online forms as well as letters that are mailed) – XML Publisher + Peoplecode notification classes for mails
  • Reports, simple single data source (reports with one BEGIN-SELECT) – XML Publisher
  • Reports, simple multiple data sources (more than one BEGIN-SELECT and simple calculations) – Migration might require the use of AE to execute the calculations and XML/XSL modifications. Move to XML Publisher if the logic is not complicated, otherwise retain SQRs.
  • Reports, on-breaks and multiple reports – XML Publisher
  • Reports, complex with multiple BEGIN-SELECTs and complex conditional logic – Do not migrate; the effort is not worth it.
  • Reports, status of batch processes (error reports and status reports of processing done by an AE or other batch programs) – Move to XML Publisher.

Thursday, July 10, 2008

Time and Labor Security demystified

Security in Time and Labor is an area that needs more clarity for most new implementers. Almost every team I have consulted for has had the question 'how do I ensure that the employees reporting to a manager show up in the manager self service pages of Time and Labor?'
This post aims at being a guide towards designing the T&L security setup.

T&L Security is primarily concerned with ensuring the correct access of reportees to managers as well as controlling the dates which are open to any employee to report time. We are more concerned with the first case here.

Security in Time and Labor is a two step process - 1. grouping the employees reporting to a manager/HR administrator together and 2. granting access to the correct groups to the respective managers. To group employees together, T&L has a powerful feature called Dynamic Groups (there is another category called static groups, which are rarely used in practice and thus do not deserve mention here) which is similar to the Group Build feature in core HRMS. A Dynamic Group constructs a group of employees based on an SQL selection criterion (for example - select emplid from ps_job where reports_to = ) and is thus truly dynamic. A Dynamic Group, once created, has to be refreshed on a daily basis to ensure that it includes all eligible employees.
The illustration below shows how a dynamic group can be created by defining the appropriate SQL eligibility criteria.
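
As a rough sketch, the eligibility criterion behind such a group might resolve to a select like the one below (the position number and the active-status filter are illustrative assumptions):

select emplid
from ps_job
where reports_to = 'POS00123' -- supervisor's position number (sample value)
and empl_status = 'A';
-- a real criterion would also restrict this to the current effective dated row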

After the group has been created, the next step is to assign it to those managers/administrators who need access to it. Access in Time and Labor is controlled by an operator's ROWSECCLASS, so the groups have to be linked to the eligible ROWSECCLASSES to give managers access to their reportees' data. The illustration below shows how a group can be linked to a ROWSECCLASS. Keep in mind that one group can be linked to multiple ROWSECCLASSES and one ROWSECCLASS can be linked to multiple groups.



Now coming to the important question - how does a manager get access to employees under him? Let's just look at the below process flow for that:

1. Manager logs in with his/her Operatorid.
2. The ROWSECCLASS of this Operator is fetched from PSOPRDEFN.
3. The TL_GRP_SECURITY table is queried to get all the groups that this ROWSECCLASS has access to.
4. The TL_GROUP_DTL table is queried to get all the EMPLIDs part of each group fetched in step 3.
5. The employees fetched are displayed on the MSS pages.
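
The flow can be approximated in a single query. This is a sketch only - the key columns on the two group security records are assumptions based on the flow above, and the operator id is a sample value:

select distinct d.emplid
from psoprdefn o, ps_tl_grp_security s, ps_tl_group_dtl d
where o.oprid = 'MGR001'              -- sample operator id
and s.rowsecclass = o.rowsecclass     -- group_id/rowsecclass columns assumed
and d.group_id = s.group_id;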

This is an extremely simplified description of what actually goes on behind the scenes when the system tries to retrieve the employees in a T&L MSS page. So, to ensure that a manager has access to his reportees, check the following:
a. All the reportees are part of at least one Dynamic Group. Query TL_GROUP_DTL to get the groups of an employee.
b. The manager/administrator has access to those groups. Query TL_GRP_SECURITY to get all the groups that a particular ROWSECCLASS has access to.
c. Schedule the Dynamic Group Refresh process on a nightly basis to ensure that the dynamic groups are up to date.

Caveat: The above process requires the creation of a large number of ROWSECCLASSES and Dynamic Groups to maintain exclusivity of employee access to managers. Organisations that follow this approach will need to custom-build a program that creates a new ROWSECCLASS when a new manager is hired, programmatically creates a dynamic group and assigns security for that dynamic group to the new ROWSECCLASS. This complex process can be circumvented by customising the code behind the 'Get Employees' button present in all T&L MSS employee search pages. The approach would be to create a single Dynamic Group, enroll all employees in it and give all managers/administrators access to that single group. Then customise the Peoplecode behind the Get Employees button to fetch only those reporting to the manager from the entire set fetched by the vanilla logic explained above. The customisation has to be done in the FUNCLIB_TL_CP.MGR_SS_FUNC.FieldFormula event.





T&L security design is an area where we see a lot of questions being raised, especially considering the integration of AWE in v9.1. If you have specific questions on this, reach out to me @ jiju@hroiconsulting.com

Peoplesoft Time and Labor Setup Guide: Summary

Having completed over seven end-to-end rollouts in Peoplesoft Time and Labor, my team recently decided to design a quick implementation package for Peoplesoft Time and Labor implementations. This package contains: ready made data loading programs for every T&L setup component; a complete T&L design strategy guide covering the best practices in designing the various data elements; a ready made Peoplesoft HRMS to Time and Labor integration program; an exhaustive client questionnaire gathered from our experience that will help the team capture even the smallest client requirement at the earliest stage; a data collection form for all T&L components; and a reported time mass load program. These tools will drastically reduce the implementation time for any T&L implementation and serve as a ready reckoner for any organisation going in for one.

Pasted below is a summary of the setup sequence as well as the setup methodology for all T&L components, an excerpt from the setup guide.

Setup Component – Data Loading Method

  • T&L Installation – Manual
  • Time Zone Offset – Manual
  • Time Periods – Manual
  • Pay System – Manual
  • TRC Category – Manual
  • TRC – AE
  • TRC Program – Manual/AE depending on volume of data
  • Compensatory Time Off Plan – AE
  • Override Reason Code – Manual
  • Time Reporting Template – Manual
  • Shifts – AE
  • Schedule Group – Manual
  • Schedule Definition – AE
  • Holiday Schedule – AE
  • Value List – Manual
  • Template Built Rule – Manual
  • Rule Program – Manual
  • Task Template – Manual
  • Task Profile – Manual
  • Task Group – Manual
  • Time Administration Options – Manual
  • TL Permission List Security – Manual
  • Dynamic Group Creation – Manual
  • Employee Enrolment – AE
  • Compensation Plan Enrolment – AE
  • Schedule Assignment – AE

Wednesday, July 9, 2008

SQR and the XML Publisher Juggernaut

I have been surprised at the vigour with which Oracle has been promoting its Fusion reporting tool - XML Publisher. Any documentation/presentation/webcast on Fusion tools talks at great length about the features and advantages of XMLP, how it has been integrated into Peopletools and, more importantly, how it's going to be the only reporting tool for Fusion moving forward. One of the earliest documents I read on strategies for Peoplesoft customers to be Fusion ready had abandoning the use of SQRs and embracing XMLP as one of the main Fusion oriented decisions for a Peoplesoft customer. Toeing this line, almost all new Peoplesoft implementations are religiously rejecting SQRs and embracing XMLP as the standard reporting tool. Is this the end of the road for SQRs? Probably yes!

As much as I have seen, XMLP is a wonderful reporting tool. Creating a report using XMLP is many times easier, faster and more elegant than with SQRs. Its powerful feature of letting the layout be designed separately will make it a hit with business end users, who can now design their own desired layout in a Microsoft Word document without any help from IT. (I had a good laugh when I realised that all I had to do to add a header with the company logo in XMLP was to add a header in MS Word! Huh? That's it?! Goodbye to all the SQR coding! In hindsight, another major advantage is with spacing the report. Adding a new column to an SQR report requires programmatically adjusting the column numbers of the various variables being printed to align to the new layout; with XMLP, it just requires us to drag the fields to their appropriate places!) The Peoplesoft team at Oracle have done well to integrate XMLP in the Peopletools environment. XMLP is the newest addition to the Reporting Tools in PIA, and there is a flurry of new Peoplecode classes (refer to the Application Packages starting with PSXP_) that facilitate the running of XMLP reports from online pages, AEs etc.

But is XMLP a complete replacement for SQRs? XMLP taken alone is just a reporting tool, while SQR is a programming language. The flexibility and power of SQRs to handle extremely complex reporting requirements (working with multiple data sets, calculations, conditional logic, usage of data structures etc.) cannot be matched by XMLP. For example, can I write an XMLP report that fetches all the employees who returned from sick and maternity leave today (for this I will need to fetch the action reason of the LOA row previous to the latest RFL row, assuming that sick and maternity are captured using the LOA action) and execute different processing logic depending on the leave reason? No. XMLP is not designed to work with multiple data sets and cannot handle such a scenario where conditional logic has to be executed on multiple data sets (first on the current effective dated row to fetch all RFL rows, and second to find those RFL employees who had gone on LOA due to sickness or maternity). This is the forte of SQRs - the ability, being a procedural language, to handle any complex logic.

So the more important question is - is it wise to completely abandon the use of SQRs as a matter of strategy? Both XMLP and SQRs have their own strengths, and why can't we continue to use SQRs for requirements that are easier to code and better handled by that tool? For me, the most prudent approach will be to have a mix of both tools and to judiciously choose the right tool for the right requirement.

Custom names for SQR multiple report outputs

I had a requirement to produce two PDF files from a single SQR with specific file names - say RFL011008_USA.pdf and RFL011008_CAN.pdf. While there is significant documentation on generating multiple reports in SQRs, I thought that it would be instructive to share the steps to produce multiple reports with unique output file names.

The file names of the various output files are controlled by the output-file-mode parameter in the sqr.ini file, and they usually follow the convention sqrname_XX.ext (like sqrname_01.ext, sqrname_02.ext). To override this default file naming convention, use the NEW-REPORT command.
A sample program to produce multiple reports is as follows:

BEGIN-SETUP

DECLARE-REPORT USA
END-DECLARE

DECLARE-REPORT CANADA
END-DECLARE

END-SETUP


BEGIN-PROGRAM

LET $USA_FILE = '/OPT/PSOFT/HRMS/SQR/RFL011008_USA.PDF'
LET $CAN_FILE = '/OPT/PSOFT/HRMS/SQR/RFL011008_CAN.PDF'

USE-REPORT USA
NEW-REPORT $USA_FILE
PRINT 'HRMS REPORT FOR USA' (1,1)

USE-REPORT CANADA
NEW-REPORT $CAN_FILE
PRINT 'HRMS REPORT FOR CANADA' (1,1)

END-PROGRAM

**Append the flag -MR nn to the command line while running a multiple report program. Here nn represents the number of reports to be produced. (I was left wondering why only a single report was being produced when I had coded for two reports in the SQR. Adding -MR 2 to the command line solved the issue.)

SQR Optimisation Techniques

One of the biggest challenges I've faced while writing HRMS and Payroll reports has been the performance tuning of SQRs. A typical payroll report which needs to display the EMPLID, NAME and earnings for a certain pay period involves a join of three to four massive transaction tables, and the execution time can regularly exceed 15-20 minutes. How do we bring it down to an acceptable run time of below 5 minutes? Though it's a tall order getting such a drastic performance improvement, I have been able to get significant results by following the steps below:

1. Check your SQL queries: Most performance issues can be attributed to poorly written SQLs. Critically review each SQL statement of your SQR and ensure its correctness.

2. Usage of temporary tables: This is number two on my list. A strategy that I adopt when multiple tables have to be joined and the report involves complex processing is to pull out the master data and insert it into a temporary table created in the BEGIN-SETUP section of the SQR. Further processing can be done by referring to the data in the temporary table; this reduces trips to the transaction tables, thereby giving significant performance enhancements.
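
A plain SQL sketch of the idea - every table and column name below is hypothetical, and in an actual SQR the create would sit in the BEGIN-SETUP section:

-- stage the master rows once (all names hypothetical)
create table pay_rpt_tmp as
select emplid, pay_end_dt, erncd, earnings_amt
from pay_detail_master
where pay_end_dt = to_date('31-JAN-2008','DD-MON-YYYY');

-- an index on the staging table keeps the later lookups cheap
create index pay_rpt_tmp_idx on pay_rpt_tmp (emplid);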

3. Usage of arrays: Arrays are excellent instruments to improve the maintainability of the program. I would always use arrays to hold the values of totals and other variables that are to be re-initialised multiple times in the report (note that the values of an array can be reset using the CLEAR-ARRAY command). If plain variables were used instead, the number of variables to be maintained, the initialisation routines etc. would add to the complexity of the program.

4. Dynamic SQLs: The usage of dynamic SQLs greatly reduces the length of a program and helps in improving its performance. One requirement I had was to generate two reports with the same output fields and filter criteria, but one sorted by EMPLID and the other by NAME. In this case, the only difference was the ORDER BY clause, and instead of writing two different BEGIN-SELECTs, the requirement was achieved by maintaining a single BEGIN-SELECT and passing the ORDER BY clause as a dynamic SQL.

5. Break down your SQLs: This is something you will find in Peoplebooks also. Make sure that you are querying the database only when necessary. Let's say you have to produce a report which lists the EMPLID, NAME, BIRTHDATE, manager's NAME and JOBCODE description. This will involve a join of four tables (including PS_NAMES aliased twice) and, considering the key structure of these tables, will take considerable execution time. An alternate strategy is to have a separate procedure that fetches the JOBCODE description by querying only PS_JOBCODE_TBL, instead of making it part of the join.
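
A sketch of such a standalone lookup (the bind values are supplied per report row; SETID resolution is simplified here as an assumption):

select descr
from ps_jobcode_tbl j
where j.setid = :setid       -- setid/jobcode binds supplied by the caller
and j.jobcode = :jobcode
and j.effdt = (select max(j1.effdt)
               from ps_jobcode_tbl j1
               where j1.setid = j.setid
               and j1.jobcode = j.jobcode
               and j1.effdt <= sysdate);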

6. Minimal code and re-usable logic: The beauty of SQR is that it is the most flexible tool in the Peoplesoft technical environment (especially considering the rigid, ready made nature of an AE, CI or Peoplecode) and offers the developer multiple paths to one solution. Evolve logic that re-uses your procedures and minimises database trips.

What about LOAD-LOOKUPs? Documentation says that load lookups give significant performance enhancements. Practically, I have not been able to achieve this. The lookup gets loaded at the start of the report, and the time taken to load it offsets any advantage the concept delivers. Usage of temporary tables is a much better alternative - in effect the two concepts are inherently the same: reduce database trips and the querying of large transaction tables.


Learnings from a multi country phased Peoplesoft HRMS implementation

  • Understand the overall architecture and AS-IS structure of the various countries. Even if a phased implementation strategy is adopted, it is extremely important to understand the working of the various countries in as much detail as possible. This will ensure that no major design changes are required at a later phase. It is a common observation that projects planned in silos (drawing up a design document with just one country in mind) run into major design issues at later phases, as the initial design does not cater to the later phases.

  • Traditional Waterfall and other sequential styles of project management do not work. This is especially true for new implementation / legacy-to-Peoplesoft migration scenarios. The learning curve of the users in the new system is large; their expectations are hinged on their experience of the old system and they are unaware of the capabilities of the new one. Fundamentally, the users will be in a very fuzzy state and it takes time for them to come to terms with the new system. So don't expect your requirements to be frozen. Any plan that looks at freezing requirements after the analysis phase will fail.

  • Get the users to participate in development early. This is one of the most crucial points in a multi country implementation. There will be a number of screens/functionalities that are common across the various countries - thus it's extremely important to let the users participate actively in the development of those common functionalities. A common mistake is to involve only the users of the countries currently being implemented in the testing phase, and to realise later that users of other countries have different requirements on the same panels. The rework required at that stage can be damaging. To prevent this, involve users of all countries in the development of common panels and functionalities.

  • Evolutionary Prototyping is a wonderful model to use in this case (development of common screens and functionalities). Evolutionary Prototyping is a concept where the development team comes up with early prototypes (even when the requirements are not completely finalized) and lets the users test the same and come with suggestions. The prototype evolves in this manner and finally the desired product results. This is an extremely powerful method for the development of user screens and common functionalities.
  • Different project management models for various phases. The nature of the implementation varies from phase to phase. The initial phase represents a period of high risk, uncertain requirements, a skeptical customer, inexperienced team members etc. The project becomes more predictable and stable as the phases progress, so it makes a lot of sense to use different project management models at the initial and later stages. It's also effective to divide the initial phase into two parts: overall system architecture analysis, and phase specific analysis, design and development. The system analysis part should adopt a combination of the Spiral and Waterfall models: as this is a purely analysis activity, the Waterfall model gives great control over the progress, while the Spiral method concentrates on a specific phase with emphasis on identifying and tackling risks. For the second part, abandon the Waterfall approach and go for a combination of Spiral and customer oriented development methods. This will ensure that optimum development speed is achieved with customer involvement. As the project phases out, we have the option of going for a Waterfall model or an Agile model. Waterfall, because the project is more predictable and this will enable good control over the various phases; but it is not a high productivity model, as it is extremely rigid. So it's best to go for an Agile method (XP/Scrum etc.). Agile models are development oriented models that place great onus on the capability, experience and commitment of the developers; this will leverage the developers' experience as well as deliver the product at a faster rate. One of the best practices in case of a phased roll out is to leverage the learning of past phases in the current and future phases. Each phase in the migration will follow almost the same steps, and project management can be eased if effort is made to evolve a specific migration process document that can be followed for the coming phases.
  • Never leave the most complex roll outs for the last. Though it's a good idea to start off implementing the easiest country first (it will let the developers get a good idea of the architecture, gain confidence, evolve processes and, more importantly, give the customer confidence), it can be disastrous to put off the implementation of the toughest phase till the end. The reasons are as below:
  • This phase might have the most complex architecture and, if not properly designed, it might require major redesign of the existing architecture. It is easier to make a change to the design earlier in the project.
  • The most complex phase will require maximum effort and user intervention. It's always better to finish such tasks earlier in the project, as you can catch up for lost time later with easier tasks.
  • Motivation levels of employees flag with the timeline of the project. Make use of the initial excitement and energy to get the toughest phase implemented.
  • Any unforeseen event will be more difficult to handle at the end (performance drop, code overwrite, attrition etc.)
  • Planning for the roll out dates. A multi country implementation that is spread across more than one fiscal/calendar year needs to plan the rollout dates prudently. Certain points to be considered:
  • Will it affect Year End processing? If so, what is the window of time when migration cannot take place? Should all migration happen before Year End? Is it ok to process Year End in two separate systems? This has to reflect in the project plan.
  • Will scenarios where an employee has data in two different systems affect any processing? For example, let’s say the project involves the migration of two countries from a legacy system to an ERP. Suppose the migration of one country has been completed and suppose an employee is transferred from the second country (legacy) to the new country (ERP) after the Go Live. Now the employee data is spread across the two systems for the current year. Will there be any process (for example Payroll processing/ Leave Processing) that looks at the cumulative data of the employee for processing? This is a major design question, but generally applicable to all implementations.
  • Serial-Parallel-Series approach. Can we have a mix of the phase wise and Big Bang approaches? This would be the optimum plan. Ideally the project should start with one phase (Serial) and, once the team gathers significant experience of the system, more than one country/phase can be implemented in parallel (Parallel). This will help speed up the complete migration process. A final approach could be looking at the application in general after the implementation of all the phases and pitching in for system wide improvements (Series).
  • An alternate phase wise implementation strategy is given below. This method is built on the fact that any phase wise implementation can be thought of as a Wheel and Spoke model (refer to the illustration below). The system consists of a Central Core that encompasses common setups, screens, data, reports and architecture. All the other phases can be visualized as plug-ins to the core. Thus, the critical activity is getting the Central Core set up and deciding on the integration points for the various plug-ins. Once this is designed and developed, any number of plug-ins can be seamlessly integrated into the core. Consider the design and development of the core itself as a separate phase; this lays the foundation of a very stable integration of the various phases. Combining the Serial-Parallel approach with this would mean that the Central Core is implemented first, followed by Phase I in a serial manner. Once the team has sufficient experience and the processes are mature, the Phase II and Phase III implementations can be taken up in parallel.

Monday, July 7, 2008

Query to get the number of working days between two given dates

A friend of mine talked about a requirement where reminder mails had to be sent out to managers who had leave requests from their subordinates that had been unapproved for the last five 'working days'. Working days meant all days between two dates excluding Saturdays, Sundays and statutory holidays.
While this can be easily achieved by a two step process - looping through all days between the two given dates and checking individually whether each day is non-working or not - I felt that it would be interesting to get the desired output in a single SQL query.
This is what transpired:
Pseudo Code:
1. Get all the days between the two dates in an inline view.
2. Exclude Saturdays and Sundays from the above result set.
3. Exclude all holidays that fall in the given date range by checking the holiday schedule of the employee.

The below query fetches the working days between 01 Jan 08 and 10 Jan 08. Note that the logic to exclude holidays is constructed from a Peoplesoft point of view.
select daynum from
  (select to_date('31-DEC-2007','DD-MON-YYYY') + rownum as daynum
     from all_tables
    where rownum <= (select to_date('10-JAN-2008','DD-MON-YYYY')
                          - to_date('31-DEC-2007','DD-MON-YYYY')
                       from dual))
 where to_char(daynum,'DY') not in ('SAT','SUN')
   and daynum not in (select holiday
                        from ps_holiday_date
                       where holiday_schedule = 'CHEG'
                         and holiday between '01-JAN-2008' and '10-JAN-2008')

First and Last Dates of a Quarter: Oracle Query

Below is one of the many queries (Oracle specific syntax) one can write to get the first and last days of all four quarters of the current year:

select
  add_months(trunc(sysdate,'Y'), 3*(rownum-1)) as qtr_start_dt,
  last_day(add_months(add_months(trunc(sysdate,'Y'), 3*(rownum-1)), 2)) as qtr_end_dt
from all_indexes
where rownum < 5

Monday, June 30, 2008

Do Actions in an Application Engine: A Closer Look


I have sometimes found the Do actions in an Application Engine to be pretty confusing – especially the nature of iterations that happen with each of these actions. I thought it would be informative to share some insights into the working of these actions in an AE program.

Consider the following scenario:

If we have a select statement as below:

Select emplid from PS_JOB where action_dt = sysdate order by emplid.

Suppose executing this query on your DB returns 10 rows. To understand the working of the Do actions, if I embed this query in an AE and print the EMPLID to a file, how many EMPLIDs will be printed in the case of each Do action?

My AE step would look like this:



The Do When in the above diagram can be replaced by Do While and Do Select while checking the number of iterations for each. The Do actions will contain the SQL select statement mentioned before, and the Peoplecode will write the EMPLID value to a file. In the case of Do Until, the Peoplecode will precede the Do Until (as Do Until is the last action in a step).

Coming back to our original question.

How many times will the EMPLID be written in the file in each of these cases??

Do When: Do When is a loop entry criterion. It will always be executed once and only once, as long as the SQL statement fetches a row. Thus, with Do When, one row will be printed. This is pretty straightforward.

Do While: All those of you who thought that the Do While loop will iterate 10 times will need to rethink the working of this loop. Do While will continue executing as long as at least one row is fetched. The background execution logic of a Do While loop is as follows: every time the AE executes the SQL select statement in the Do While action, if it returns a row it proceeds to the further actions; if not, the step is terminated. What is to be understood here is that, in the absence of a loop termination criterion (or, in other words, logic to discard already selected rows), the select statement inside a Do While will always fetch the same row. This means that a query like the one we have written (select emplid from ps_job) will go into an infinite loop, as there is no termination criterion for this SQL, and the same EMPLID (the first id returned by the query) will be printed in the file over and over. Always keep in mind that a loop termination criterion is absolutely necessary in the case of a Do While.
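
One common termination pattern is a processed flag on the driving table - a sketch only, with hypothetical record/field names and AE %Bind syntax assumed:

-- Do While action: loop as long as unprocessed rows remain
-- (PS_MY_DRIVER_WRK and PROCESSED_FLAG are hypothetical names)
SELECT 'X' FROM PS_MY_DRIVER_WRK WHERE PROCESSED_FLAG = 'N'

-- a later SQL action in the same step must discard the row just handled,
-- otherwise the Do While re-selects it forever
UPDATE PS_MY_DRIVER_WRK SET PROCESSED_FLAG = 'Y' WHERE EMPLID = %Bind(EMPLID)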

Do Until: Do Until is uncomplicated. The step repeats until a row is returned by the query, and since the check happens at the end of the step, the step is always executed at least once. As our query returns a row, the loop is terminated after the first iteration itself, so one EMPLID is printed.

Do Select: Do Select is the most commonly used Do action. But I left Do Select for the last because it too is kind of tricky, like Do While. I do not know how many of you have noticed the various types of a Do Select action:



What concerns us here is the difference between Select/Fetch and Reselect. Will there be any difference if I run the Do Select in these two different modes?

Yes, the working of the Do Select is completely different in these two cases. In the case of a Select/Fetch the loop will iterate 10 times, while for Reselect an infinite loop is triggered (exactly like a Do While). The reason lies in the difference in the working of these two Do Select types. Select/Fetch hits the DB just once, fetches all the values of the select statement into a cursor and moves through the cursor during each iteration. This allows the sequential processing which is so integral to row by row processing in an Application Engine. Reselect, on the other hand, queries the DB for each iteration, picks up the first row returned by the select statement and puts it in a cursor. Thus, at any given point of time there will be only one row in the cursor of a Reselect type Do Select action. Moreover, Reselect will be slower than a Select/Fetch due to the DB trips involved in the former.

Why does Reselect go into an infinite loop? This happens because for each iteration the same SQL select statement is fired and the row returned first is buffered into the cursor (unlike a Select/Fetch, where the SQL is executed against the DB once and the AE then moves successively through the cursor and terminates once it reaches the end). Thus, in the case of a Do Select of Reselect type, we will get a result similar to Do While - an infinite loop with the same EMPLID printed.

So what is the difference between a Do Select of Reselect type and a Do While? Remember that Reselect is used when the Restart function of an AE is turned on. A Select/Fetch Do Select does not execute commits for the entire action and thus will not be useful in case of a restart. This deficiency is overcome by the Reselect type, where commits are executed during the step execution. In hindsight, I would always recommend using the Restartable option (which is similar to Select/Fetch but with commits turned on) over the Reselect option. The difference between a Do Select (Reselect) and a Do While boils down to the elementary question - why do we have two separate actions if they perform the same function? Do While and Do Select are fundamentally different: the latter is similar to a FOR loop (it executes for a definite number of fetched rows), while the former is similar to a WHILE loop (it executes till a condition is true). That is exactly the reason why a Do While comes before a Do Select.

Monday, June 23, 2008

What's new with Time and Labor 8.9? Schedules

First off - Time and Labor 8.9 is not vastly different from Time and Labor 8.8. There are no breakthrough changes and transitioning to the new version is seamless. That is not to say that the product has no enhancements at all! This post discusses one important feature change that has happened - Setting up and using Work Schedules.

8.8 Scenario:
In 8.8, schedule creation was a 4 step process. The steps followed were:
Create Shifts --> Create Workdays --> Create Schedule Template --> Create Schedule Definition.
After creating the schedules, a Schedule Calendar Build process had to be run to create the whole Schedule Calendar.


8.9 Scenario:
Creating a schedule in 8.9 has been reduced to a one step process!! In 8.9, the creation of shifts and workdays has been made optional and the concept of a schedule template itself has been removed. One can create a schedule by directly entering the work hours of each day on the Schedule Definition page. This has greatly simplified the creation of schedules. Another change is that the Schedule Calendar build process has been made obsolete. It is no longer necessary to build a schedule calendar; the calendar is auto-built by a view, 'SCH_CLND_DTL_VW'.


New Feature: The most significant schedule-related change in 8.9 is the inclusion of a new definition - the Schedule Group. In 8.9 and above, it is mandatory to create a schedule group before a schedule definition is created. One Schedule Group has to be created for each Setid. As all schedule definitions are keyed by Setid, a Schedule Group acts as a higher key to control users' access to schedules. The Schedule Group set up page is shown below:


This new definition has the following functional fallouts:
  • A new field has been added in the Workgroup definition page called Schedule Group and it has to be filled before selecting the appropriate Work Schedule.
  • Schedule Group has been added as a new field in the 'Assign Work Schedule' and 'Assign Work Schedule to Group' pages. In these pages also, a Schedule Group has to be defined first before entering the Schedule id value.
In short - the major changes in Time and Labor 8.9 for Schedules are:

  • A single step schedule creation process compared to a 4 step one.
  • A new definition called Schedule Group introduced.
  • Schedule Assignment at employee, group and Workgroup level requires an extra field now - Schedule Group.
  • New page in ESS and MSS to view the schedule assignment of the complete month
Major tables related to schedules:
Shift Definition - PS_SCH_SHFT_TBL
Schedule Defn. - PS_SCH_DEFN_TBL, PS_SCH_DEFN_DTL.
Sched. Calendar View - PS_SCH_CLND_DTL_VW
Schedule Assignment - PS_SCH_ASSIGN
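
A quick way to check an employee's current schedule assignment - a sketch, assuming EFFDT is part of the key on PS_SCH_ASSIGN:

select * from ps_sch_assign
where emplid = 'KU0001'   -- sample emplid; effdt key column assumed
order by effdt desc;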





Thursday, June 19, 2008

Designing Workgroups in Time and Labor

Workgroup is a nodal definition in Time and Labor. It is synonymous with a Paygroup in Payroll and Absence Management, and every valid Time and Labor employee has to be attached to a Workgroup. Time reporters inherit a number of features from the workgroup, including their Work Schedule, Holiday Schedule, the TRCs they have access to, the time processing rules applied to them etc. Designing workgroups, and the employees eligible for each workgroup, is a critical aspect of any Time and Labor implementation. Here are a few thumb rules for designing workgroups:

  • A Workgroup can contain either Positive Time Reporters or Elapsed Time Reporters - never a combination of both.
  • All employees in a workgroup will have access to the same set of TRCs.
  • All employees in a workgroup will be subject to the same time processing rules.
  • All employees in a workgroup should have the same work period (Weekly, Biweekly, Monthly etc.)
  • It is not a robust design to create separate workgroups for different schedules. A schedule can, and ideally should, be directly attached to an employee.
  • If the definition of the day breaker varies for various employees in your organisation, then it might be required to group employees together based on the day breaker requirements. Thanks to Galego for bringing this point to notice.
In short - group all employees who access the same TRCs and are subject to the same Time and Labor rules into one workgroup (also ensure that all employees in a workgroup are either Elapsed or Punch reporters and share the same period id). All other definitions like Work Schedules, Holiday Schedules, Compensatory Time Off plans etc. can be attached to the employee at other levels. A number of implementations have ended up with an unmanageably large number of workgroups because the creation of a workgroup depended on a schedule, location etc.

This approach will greatly reduce the number of workgroups that will have to be configured and increase the maintainability of the application. Workgroup is inherently a definition local to Time and Labor whose main purpose is to ensure that the correct time processing rules are applied during the TIMEADMIN process. The performance of the TIMEADMIN process can also be significantly improved by decreasing the number of workgroups.

Power Packed Time and Labor

Time and Labor is the native Peoplesoft application that captures the ‘time-at-work’ of your employees. It is an extremely powerful and comprehensive application that can greatly simplify the effort capturing, reporting and interfacing requirements of your organization. The innate capabilities of this product make it an extremely strong choice for the time capturing requirements of your organization. So what makes Peoplesoft Time and Labor the preferred product?

Employee’s Perspective:

• Feature to enter time for any preferred time period. This makes time reporting seamless for employees. Employees can choose to enter time for a day, a week, a month or any other period they wish.

• Feature to enter Mass Time. The Mass Time feature lets employees handle scenarios where they want to report a certain effort/time-off for an extended period (like a month's vacation). Time entry for such a scenario can be completed in three mouse clicks! It does not get easier than this!

• Pre-delivered reports: Peoplesoft Time and Labor comes with powerful reports that show employees the summary and detail of their reported time on a daily, weekly and monthly basis. Employees can even compare their reported time to the scheduled time for any given day.

• Excellent notification features: Peoplesoft Time and Labor will notify employees when their manager approves/denies any of their reported time. This is again highly configurable and can be turned off if not required.

• Feature to view payable components separately: Not all reported time may be paid out to employees, and at times the time reporting rules might create certain payable time for them. Peoplesoft Time and Labor provides separate reports to view the time that will be paid out to payroll.

• Mobile Time feature: A powerful feature that lets employees report time on the move. Employees can report time while not connected to the Peoplesoft environment, through their PDAs, laptops etc.