Friday, March 8, 2013

Lot Genealogy (Part 1): Building a Cache From R12 Data

UPDATE 15-APR-2013: Fixed a bug with WIP Issue transactions and added some more logging (see source code).

If you have had any opportunity to work with Oracle Process Manufacturing, one of the features Oracle have removed in R12 is reporting on Lot Genealogy, specifically the ability to look at the entire genealogy for a lot.

I've tackled this issue previously using Noetix (see here). This is the first part of a series in which I'll attempt to provide a means of generating a cache of lot genealogy information that you can report on directly (rather than having to build the genealogy at the time you run the report), without using Noetix.

So here's the plan:

[Diagram: Lot Genealogy Cache: Process Overview]

This plan is based on the established Lot and Batch cycle:

[Diagram: PO > Lot > Batch Cycle]
It's the iterative nature of lots > batches and batches > lots that causes a lot of problems when you're writing reports. The key things the process above includes that the simplified diagram doesn't are the ability to reverse the creation of batches from a lot and the consumption of batches into lots.

Looking at the process overview, the first part we need to worry about is the second box down (Get Next Transaction Data). What we need to do in this process is get a sequential list of all transactions since the last time we ran the process. The SQL below will do this:

SELECT MMT.TRANSACTION_ID,
       GBH.Batch_No,
       MTLN.LOT_NUMBER,
       MTLN.TRANSACTION_QUANTITY,
       MTT.TRANSACTION_TYPE_NAME,
       MSIB.SEGMENT1             Item_Number,
       MSIT.DESCRIPTION          Item_Description,
       MSIB.ITEM_TYPE            Item_Type
  FROM INV.MTL_TRANSACTION_LOT_NUMBERS MTLN,
       INV.MTL_MATERIAL_TRANSACTIONS   MMT,
       INV.MTL_SYSTEM_ITEMS_B          MSIB,
       INV.MTL_SYSTEM_ITEMS_TL         MSIT,
       INV.MTL_LOT_NUMBERS             MLN, -- Used to identify lots that have been disabled
       GME.GME_BATCH_HEADER            GBH,
       INV.MTL_TRANSACTION_TYPES       MTT
 WHERE 1 = 1
   AND MTLN.TRANSACTION_ID = MMT.TRANSACTION_ID
   AND MTLN.INVENTORY_ITEM_ID = MMT.INVENTORY_ITEM_ID
   AND MTLN.ORGANIZATION_ID = MMT.ORGANIZATION_ID
   AND MTLN.TRANSACTION_SOURCE_TYPE_ID = 5 /* Job Or Schedule */
   AND MMT.INVENTORY_ITEM_ID = MSIB.INVENTORY_ITEM_ID
   AND MMT.Organization_Id = MSIB.Organization_Id
   AND MSIT.INVENTORY_ITEM_ID(+) = MSIB.INVENTORY_ITEM_ID
   AND MSIT.ORGANIZATION_ID(+) = MSIB.ORGANIZATION_ID
   AND MSIT.LANGUAGE(+) = USERENV('LANG') -- Avoids duplicate rows on multi-language installs
   AND MMT.Transaction_Source_ID = GBH.Batch_Id(+)
   AND MMT.TRANSACTION_TYPE_ID = MTT.TRANSACTION_TYPE_ID(+)
   AND MTLN.Lot_Number = MLN.Lot_Number
   AND MTLN.Inventory_Item_Id = MLN.Inventory_Item_Id
   AND MTLN.Organization_Id = MLN.Organization_Id
   AND MLN.Disable_Flag IS NULL;


Rather than embedding this into our process, I think a good idea would be to create an external view (which I'm going to call LOT_GENEALOGY_GETTRANSACTIONS). The source code file for this is available here (via Google Drive).
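Once the view is in place, a quick sanity check is to pull back the most recent transactions it returns; something along these lines will do it (this assumes the view was created from the query above, so adjust if your definition differs):

-- Quick sanity check: the 20 most recent transactions exposed by the view.
SELECT *
  FROM (SELECT lgt.*
          FROM lot_genealogy_gettransactions lgt
         ORDER BY lgt.transaction_id DESC)
 WHERE ROWNUM <= 20;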

The next thing we need to consider is how we're going to store the information in the cache itself. The following are the source files for each required object:
Assuming you've built the view and the tables above, the main package can then be built. The source is here.
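As an illustration of the settings table configured in the next step, the definition would look something like this (column names inferred from the insert statements below; the linked source file has the actual DDL, which may differ):

-- Illustrative sketch only: columns inferred from the INSERT statements below,
-- not taken from the actual source file.
CREATE TABLE lot_genealogy_settings
(
  setting_name  VARCHAR2(30) NOT NULL,
  value_number  NUMBER,
  value_text    VARCHAR2(240),
  value_date    DATE,
  CONSTRAINT lot_genealogy_settings_pk PRIMARY KEY (setting_name)
);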

Once that's been built, you need to configure the settings table:

insert into LOT_GENEALOGY_SETTINGS (setting_name, value_number, value_text, value_date)
values ('MAX_TRANS_NO', -1, null, null);
insert into LOT_GENEALOGY_SETTINGS (setting_name, value_number, value_text, value_date)
values ('LOGGING', null, 'YES', null);
commit;
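MAX_TRANS_NO acts as the high-water mark the incremental update works from (seeded with -1 here so the first run picks up everything), and LOGGING switches the extra logging on or off. As a rough sketch of how the package presumably uses it (the real logic lives in the package source linked above):

-- Sketch only: pull everything newer than the stored high-water mark.
SELECT lgt.*
  FROM lot_genealogy_gettransactions lgt
 WHERE lgt.transaction_id > (SELECT value_number
                               FROM lot_genealogy_settings
                              WHERE setting_name = 'MAX_TRANS_NO')
 ORDER BY lgt.transaction_id;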

And then you can run the stored procedure:

begin
  -- Call the procedure
  lot_genealogy.refreshwholecache;
end;


That will populate the cache for the first time; you then need to schedule:

begin
  -- Call the procedure
  lot_genealogy.doupdate;
end;


This will incrementally update the cache based on new transactions. We schedule it to run every 30 minutes, but how often you want it to run will depend on your volume of transactions!
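If you don't already have a scheduling mechanism in place, a standard DBMS_SCHEDULER job will do the trick. The sketch below assumes a 30-minute interval and a job name of my own choosing; adjust both to suit:

-- One way to schedule the incremental update every 30 minutes.
-- The job name is arbitrary; tune the interval to your transaction volume.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOT_GENEALOGY_DOUPDATE_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin lot_genealogy.doupdate; end;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',
    enabled         => TRUE);
END;
/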
