Monday, August 24, 2015

Load multiple target tables using the same source table

How to load multiple target tables using the same relational source table

Step 1: Create a flat file listing all the target table names into which you want to load data.

Step 2: Use the previously created target table list file as the flat file source definition for this mapping, and create one input/output port, table_name, in the flat file source definition.

Step 3: Drag this source definition into the mapping and create a SQL transformation after the Source Qualifier with the default options, then drag and drop the port from the SQ to the SQL T/R.

Step 4: Edit the SQL T/R, go to the SQL Settings tab, and enable "Add Statistic Output Port". This creates a "NumRowsAffected" output port in the SQL T/R, which returns the total number of rows affected by the SQL statements for each input row.

Step 5: Now click on the SQL Ports tab and open the SQL Query edit box to write the insert statement for the different target tables.
Click String Substitution under Ports in this window, select the port table_name, and provide your query.
The SQL query will look like:
INSERT INTO ~table_name~ (col1, col2, col3...coln) SELECT col1, col2, col3...coln FROM source_table
The ~table_name~ value changes for each row based on the value received from the flat file.
Note that you cannot use an UPDATE query here; only an INSERT statement combined with a SELECT statement is supported.
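For illustration (the table name CUSTOMER_DIM and the column list are placeholders, not from the original mapping): if a row from the flat file carries the value CUSTOMER_DIM in the table_name port, the statement the SQL transformation issues would resolve to something like:
INSERT INTO CUSTOMER_DIM (col1, col2, col3) SELECT col1, col2, col3 FROM source_table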

Step 6: Now create an Expression transformation for row-level operations and drag all ports from the SQL T/R to the Expression T/R. Here we create
an output port for the target load time; you can use the expression SYSDATE or the built-in variable SESSSTARTTIME to populate it.
In another output port, write an expression to flag any SQL error that occurred:
IIF (ISNULL (SQL_error_input), 'No Error', SQL_error_input), and connect all ports to the target table SQL_Target_Load_audit.

Step 7: Now create one audit table that captures your target load log every time the mapping runs.
This table is the target for this mapping; if any load fails, the SQL errors are written to this audit table.
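A minimal sketch of such an audit table, assuming Oracle and hypothetical column names (adjust the names and data types to your own standards):
CREATE TABLE SQL_Target_Load_audit (
    table_name        VARCHAR2(128),
    num_rows_affected NUMBER,
    sql_error         VARCHAR2(4000),
    target_load_time  DATE
);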

By doing this you can verify the target load time, any SQL errors, and the number of records processed in each run.
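Each run can then be checked with a simple query against the audit table (using the hypothetical columns from the sketch above):
SELECT table_name, num_rows_affected, sql_error, target_load_time
FROM SQL_Target_Load_audit
ORDER BY target_load_time DESC;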

Wednesday, August 19, 2015

Code Deployment in Informatica

Migrate/Deploy Code in Informatica
1. Using Deployment Groups: Create a deployment group in Repository Manager that includes all the objects to be moved to another environment, for example from the development environment to the QA environment. This approach is used where version control is enabled, so that if we later need to roll the code back, we can roll back to a previous version.
Below are the steps to create a deployment group:
In PowerCenter Repository Manager, go to Tools > Deployment > Groups, click New, provide a name, select Static or Dynamic based on your requirement, and set permissions on the group. Then start copying objects into the newly created deployment group.
A. Static deployment group: we manually add objects to a static deployment group; dependent objects can be included automatically. For example, if a mapplet is used in various mappings, it is automatically added to the deployment group.
B. Dynamic deployment group: we configure the deployment group with a query so that Informatica adds objects to it automatically; the query can be built while creating the dynamic deployment group.
After copying all the objects, we can deploy the group to the new or QA environment.
2. Import and Export as XML: all objects can be exported as XML files to a destination directory and later imported into the destination environment.

3. Copy/Paste: copy all objects and paste them into the destination repository using PowerCenter Repository Manager. Make sure to copy the source and target definitions first, and then the mappings, sessions, and workflows, in that sequential order.

Important points during code migration:

Source and target definitions not available in the deployed environment: While copying a mapping, if the source and target definitions are not available in the target environment, the mapping gets copied with the previous source and target shortcuts. We have to make sure the source and target definitions are created first, before migrating the mapping.
Impacts post migration: Before migration it is necessary to check all the sources/targets that are getting updated; those changes might cause other mappings to fail.
Connection setup missed: Database connections differ between QA and production, so the mapping will fail in production if the connections are not set up.
After migration we have to make sure we have the proper database connections with the required permissions.
Permissions on target environment objects: After migration we have to make sure proper permissions are given on all the folders to users and admins.
Parameters/Variables: We also have to change the parameters and variables in the parameter file after creating it in the target environment.
Unix/Shell scripts: Paths in the shell scripts must be updated and access for all users must be in place.
Recreating views: As the base tables change, we have to recreate the views on the new base tables that will be used in the target environment.
Creating indicator files: Whether creating an indicator file or receiving one from upstream, we have to make sure the required changes are done.

Job schedule: We have to make the required changes to the job schedule; as we have multiple workflows dependent on each other, we have to make sure proper job scheduling is in place.

Friday, August 14, 2015

Degenerate Dimension

Degenerate Dimension: A degenerate dimension is a dimension key in the fact table that does not have its own dimension table.
A degenerate dimension has a single attribute and is represented as a single value (column) in the fact table.
Data items that are not facts and that do not fit into the existing dimensions are called degenerate dimensions.
In the example sketched below, there is only one key column that is not associated with any other dimension:
product_no belongs to the product dimension and user_id belongs to the user dimension, but Invoice_No does not belong to any dimension and can be treated as the primary key for the fact table.
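A minimal sketch of such a fact table, assuming hypothetical sales columns and dimension tables (only Invoice_No has no dimension table of its own):
CREATE TABLE sales_fact (
    product_no  NUMBER REFERENCES product_dim (product_no),
    user_id     NUMBER REFERENCES user_dim (user_id),
    invoice_no  VARCHAR2(30),   -- degenerate dimension: stored directly in the fact table
    sale_amount NUMBER(12,2),
    quantity    NUMBER
);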

Degenerate dimensions are the fastest way to group similar transactions and are used when fact tables represent transactional data.
They can be used as the primary key for the fact table, but they cannot act as foreign keys.
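For example, grouping transactions by the degenerate dimension is a direct query on the fact table (using the hypothetical sales_fact table sketched above):
SELECT invoice_no, SUM(sale_amount) AS invoice_total
FROM sales_fact
GROUP BY invoice_no;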
Pros
1. It avoids an expensive join between two tables with a one-to-many relationship.
2. By degenerating the date dimension, we can avoid the need for a join between the event dimension and the time dimension.
Cons: it increases the size of the fact table.

Constraint Based Loading

Constraint-based loading is used to load data first into the parent target table and then into the child target table. We can enable the constraint-based loading option in the session properties under the Config Object tab.
The Integration Service processes records on a row-by-row basis; for every row generated by the source, the Integration Service first loads the data into the primary key table and then into the foreign key table.
It is helpful when normalizing data from a de-normalized source.
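For example (hypothetical target tables, not from the original post), with constraint-based load ordering enabled the Integration Service loads dept_target before emp_target for each source row, because emp_target references it:
CREATE TABLE dept_target (
    dept_id   NUMBER PRIMARY KEY,
    dept_name VARCHAR2(100)
);
CREATE TABLE emp_target (
    emp_id    NUMBER PRIMARY KEY,
    emp_name  VARCHAR2(100),
    dept_id   NUMBER REFERENCES dept_target (dept_id)
);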

Important points:
1. The constraint-based loading option applies to inserts only.
2. Rows cannot be updated or deleted with this option.
3. It can be implemented only when the target tables have a primary key/foreign key relationship.
4. The target tables must be in the same target connection group.
For updates and deletes with constraint-based loading, Informatica provides an option to insert, update, and delete rows in the target tables; in other words, complete constraint-based loading can be enabled by specifying the custom property "FullCBLOSupport=Yes" under the Config Object tab of the session.

We can also specify this property at the Integration Service level, which makes it applicable to all sessions and workflows.
If we are using complete constraint-based loading, the mapping should not contain active transformations that can change the Row ID value (Aggregator, Joiner, Normalizer, Rank, Sorter).

Using the Row ID, we can insert, update, and delete data in the target.

Monday, July 6, 2015

Exceptions in Informatica

Informatica Exceptions – 1
1. While doing “Generate SQL” in the Source Qualifier and trying to validate it, you get the below error:
“Query should return exactly n field(s) to match field(s) projected from the Source Qualifier”, where n is the number of fields projected from the Source Qualifier.
Possible reasons for this to occur are:
1.    The sequence of ports is wrong.
2.    The number of ports in the transformation may be more or fewer than projected.
3.    Sometimes you will have the correct number of ports in the correct order and still face this error. In that case, make sure that the owner name and schema name are specified correctly for the tables used in the Source Qualifier query, e.g., TC_0002.EXP_AUTH@TIMEP.
2. When an Oracle table is used, you get the below error:
“[/export/home/build80/zeusbuild/vobs/powrmart/common/odl/oracle8/oradriver.cpp] line [xxx]”
Where xxx is some line number, mostly 241, 291 or 416.
Possible reasons and fixes:
Use the DataDirect Oracle ODBC driver instead of the driver “Oracle in
If the table has been imported using Oracle drivers that are not supported, then columns with the Varchar2 data type are replaced by the String data type and Number columns are imported with precision zero (0).
3. You get the below error while trying to save a mapping:
Unexpected Condition Detected Warning: Unexpected condition at: statbar.cpp: 268 Contact Informatica Technical Support for assistance
This happens when there is not enough memory in the system. To resolve this we can either:
1.    Increase the virtual memory in the system.
2.    If you continue to receive the same error even after increasing the virtual memory, then in the Designer go to Tools > Options, go to the General tab, and clear the “Save MX Data” option.

Informatica Exceptions – 2
1. The session fails in Informatica with the below error message:
“FATAL ERROR: Caught a fatal signal/exception
FATAL ERROR: Aborting the DTM process due to fatal signal/exception.”
There might be several reasons for this. One possible reason is the way the SUBSTR function is used in the mappings, for example the length argument of the SUBSTR function being specified incorrectly.
Example: 
IIF (SUBSTR (MOBILE_NUMBER, 1, 1) = ‘9’, SUBSTR (MOBILE_NUMBER, 2, 24), MOBILE_NUMBER)
In this example MOBILE_NUMBER is a variable port that is 24 characters long. When the field itself is 24 characters long, the SUBSTR starts at position 2 and goes for a length of 24, which would reach the 25th character.
To solve this, correct the length argument so that it does not go beyond the length of the field, or omit the length argument so the function returns the entire string starting from the start position.
Example: 
In this example modify the expression as follows: 
IIF(SUBSTR(MOBILE_NUMBER, 1, 1) = ‘9’, SUBSTR(MOBILE_NUMBER, 2, 23), MOBILE_NUMBER)
OR
IIF(SUBSTR(MOBILE_NUMBER, 1, 1) = ‘9’, SUBSTR(MOBILE_NUMBER, 2), MOBILE_NUMBER).
2. The following error can occur at times when a session is run
“TE_11015 Error in xxx: No matching input port found for output port OUTPUT_PORT TM_6006 Error initializing DTM for session…”
Where xxx is a Transformation Name.
This error occurs when the transformation is corrupted. To resolve it, recreate the transformation in the mapping that has the error.
3. At times you get the below problems:
1. When opening the Designer, you get “Exception access violation” or “Unexpected condition detected”.
2. You are unable to see the Navigator window, Output window, or Overview window in the Designer even after toggling them on.
3. Toolbars or checkboxes are not showing up correctly.
These are all indications that the pmdesign.ini file might be corrupted. To solve this, follow the steps below.
1. Close Informatica Designer.
2. Rename the pmdesign.ini file (in c:\winnt\system32 or c:\windows\system).
3. Re-open the Designer.
When PowerMart opens the Designer, it creates a new pmdesign.ini if it does not find an existing one. Even reinstalling the PowerMart client will not recreate this file if it finds an existing one.

Informatica Exceptions – 3
1. There are occasions where sessions fail with the following error in the Workflow Monitor:
“First error code [36401], message [ERROR: Session task instance [session XXXX]: Execution terminated unexpectedly.] “
Where XXXX is the session name.
The server log/workflow log shows the following: 
“LM_36401 Execution terminated unexpectedly.”
To determine the error, do the following:
a. If the session fails before initialization and no session log is created, look for errors in the workflow log and pmrepagent log files.
b. If the session log is created and if the log shows errors like
“Caught a fatal signal/exception” or
“Unexpected condition detected at file [xxx] line yy”
Then a core dump has been created on the server machine. In this case, Informatica Technical Support should be contacted with specific details. This error may also occur when the PowerCenter server log becomes too large and the server is no longer able to write to it; in that case a workflow and session log may not be completed. Deleting or renaming the PowerCenter Server log (pmserver.log) file will resolve the issue.
2. Given below is not an exception but a scenario most of us would have come across.
Rounding problems occur with columns defined in the source as Numeric with precision and scale, or lookups fail to match on those same columns. Floating-point arithmetic is always prone to rounding errors (e.g. the number 1562.99 may be represented internally as 1562.988888889, very close but not exactly the same). This can also affect functions that work with scale, such as the ROUND() function. To resolve this, do the following:
1.    Select the Enable high precision option for the session.
2.    Define all numeric ports as the Decimal data type with the exact precision and scale desired. When high-precision processing is enabled, the PowerCenter Server supports numeric values up to 28 digits. However, the trade-off is a performance hit (actual performance depends on how many decimal ports there are).

Oracle Exceptions in Informatica
Normally, a fatal Oracle error may not be registered as a warning or row error and the session may not fail; conversely, a non-fatal error may cause a PowerCenter session to fail. This behavior can be changed with a few tweaks in:
A. the Oracle stored procedure,
B. the OracleErrorActionFile, and
C. the server settings.
Let us see this with an example.
An Oracle stored procedure, under certain conditions, returns the exception NO_DATA_FOUND. When this exception occurs, the session calling the stored procedure does not fail.
Adding an entry for this error in the ora8err.act file and enabling the OracleErrorActionFile option does not change this behavior (both ora8err.act and OracleErrorActionFile are discussed later in this post).
When this exception (NO_DATA_FOUND) is raised in PL/SQL, it is sent to the Oracle client as an informational message, not an error message, and the Oracle client passes this message to PowerCenter. Since the Oracle client does not return an error to PowerCenter, the session continues as normal and does not fail.
A. Modify the stored procedure to return a different exception or a custom exception. A custom exception number (only between -20000 and -20999) can be raised using the raise_application_error PL/SQL command as follows:
raise_application_error (-20991, ' has raised an error', true);
Additionally add the following entry to the ora8err.act file:
20991, F
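A minimal sketch of the idea, assuming a hypothetical procedure and lookup query (only the exception handler is the relevant part):
CREATE OR REPLACE PROCEDURE get_customer_name (p_id IN NUMBER, p_name OUT VARCHAR2) AS
BEGIN
    SELECT cust_name INTO p_name FROM customers WHERE cust_id = p_id;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        -- re-raise as a custom error so the Oracle client reports a real error to PowerCenter
        raise_application_error(-20991, 'get_customer_name has raised an error', true);
END;
/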
B. Editing the Oracle Error Action file can be done as follows:
1. Go to the server/bin directory under the Informatica Services installation directory (8.x) or the Informatica Server installation directory (7.1.x).
E.g.,
For Infa 7.x
C:\Program Files\Informatica PowerCenter 7.1.3\Server\ora8err.act
For Infa 8.x
C:\Informatica\PowerCenter8.1.1\server\bin
2. Open the ora8err.act file.
3. Change the value associated with the error.
“F” is fatal and stops the session. “R” is a row error: the row is written to the reject file and processing continues with the next row.
Examples:
To fail a session when the ORA-03114 error is encountered change the 03114 line in the file to the following:
03114, F
To return a row error when the ORA-02292 error is encountered, change the 02292 line to the following:
02292, R
Note that the Oracle action file only applies to native Oracle connections in the session. If the target is using the SQL*Loader external loader option, the message status will not be modified by the settings in this file.
C. Once the file is modified, the following changes need to be made at the server level.