Q1. Differentiate between Informatica and DataStage.
Q2. What is Informatica PowerCenter?
Q3. Mention some typical use cases of Informatica.
Q4. How can we filter rows in Informatica?
Q5. Differentiate between Joiner and Lookup transformations.
Q6. In Informatica Workflow Manager, how many repositories can be created?
Q7. What are the types of lookup transformation?
Q8. How do pre-session and post-session shell commands function?
Q9. What can we do to improve the performance of Informatica Aggregator Transformation?
Q10. How can we update a record in the target table without using Update Strategy?
This Informatica interview questions blog is broadly divided into the following categories:
Basic Interview Questions
1. Differentiate between Informatica and DataStage.
| | Informatica | DataStage |
|---|---|---|
| GUI for development and monitoring | PowerDesigner, Repository Manager, Workflow Designer, and Workflow Manager | DataStage Designer, Job Sequence Designer, and Director |
| Data integration solution | Step-by-step solution | Project-based integration solution |
2. What is Informatica PowerCenter?
Informatica PowerCenter is an ETL/data integration tool that has a wide range of applications. This tool allows users to connect to and fetch data from different heterogeneous sources and subsequently process it.
For example, users can connect to a SQL Server database or an Oracle database, or both, and also integrate the data from both these databases into a third system.
3. Mention some typical use cases of Informatica.
There are many typical use cases of Informatica, but this tool is predominantly leveraged in the following scenarios:
- When organizations migrate from the existing legacy systems to new database systems
- When enterprises set up their data warehouse
- While integrating data from various heterogeneous systems including multiple databases and file-based systems
- For data cleansing
4. How can we filter rows in Informatica?
There are two ways to filter rows in Informatica; they are as follows:
- Source Qualifier Transformation: It filters rows while reading data from a relational data source. It minimizes the number of rows while mapping to enhance performance. Also, Standard SQL is used by the filter condition for executing in the database.
- Filter Transformation: It filters rows within mapped data from any source. It is added close to the source to filter out the unwanted data and maximize performance. It generates true or false values based on conditions.
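As a rough Python sketch (not Informatica syntax), the difference between the two approaches can be pictured as follows. The row data and the condition are made up for illustration: a Source Qualifier filter becomes part of the read itself (a SQL `WHERE` clause in the database), while a Filter transformation tests rows that are already in the pipeline.

```python
# Illustrative sketch only; Informatica implements these as transformations, not Python.
rows = [
    {"id": 1, "amount": 250},
    {"id": 2, "amount": 75},
    {"id": 3, "amount": 400},
]

# Source Qualifier style: the condition is applied while reading the source,
# so fewer rows ever enter the mapping (in a database this would be
# SELECT ... FROM orders WHERE amount > 100).
def read_with_source_filter(source, condition):
    return [row for row in source if condition(row)]

# Filter transformation style: rows already flowing through the pipeline are
# tested, and only those evaluating to True pass downstream.
def filter_transformation(pipeline_rows, condition):
    return [row for row in pipeline_rows if condition(row)]

large_orders = read_with_source_filter(rows, lambda r: r["amount"] > 100)
print([r["id"] for r in large_orders])  # [1, 3]
```

Both produce the same rows here; the performance difference is *where* the filtering happens.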
5. Differentiate between Joiner and Lookup transformations.
| Joiner Transformation | Lookup Transformation |
|---|---|
| It is not possible to override the query | It is possible to override the query |
| Only the ‘=’ operator is available | All operators are available for use |
| Users cannot restrict the number of rows while reading relational tables | Users can restrict the number of rows while reading relational tables |
| It is possible to join tables with Joins | It behaves as a Left Outer Join while connecting with the database |
6. In Informatica Workflow Manager, how many repositories can be created?
Depending upon the number of ports that are required, repositories can be created. In general, however, there can be any number of repositories.
7. What are the types of lookup transformation?
There are four different types of lookup transformation:
- Relational or flat-file lookup: It performs a lookup on relational tables.
- Pipeline lookup: It performs a lookup on application sources.
- Connected or unconnected lookup: While the connected lookup transformation receives data from the source, performs a lookup, and returns the result to the pipeline, the unconnected lookup happens when the source is not connected. It returns one column to the calling transformation.
- Cached or uncached lookup: Lookup transformation can be configured to cache lookup data, or we can directly query the lookup source every time a lookup is invoked.
8. How do pre- and post-session shell commands function?
A command task can be called as a pre-session or post-session shell command for a session task. Users can run it as a pre-session command, a post-session success command, or a post-session failure command. Based on use cases, the application of these shell commands can be changed or altered.
9. What can we do to improve the performance of Informatica Aggregator Transformation?
Aggregator performance improves dramatically if records are sorted before being passed to the aggregator and the ‘Sorted Input’ option under Aggregator Properties is checked. The record set should be sorted on the columns that are used in the Group By operation. It is often a good idea to sort the record set at the database level, for example, inside the Source Qualifier transformation, unless there is a chance that the already sorted records from the source qualifier can again become unsorted before reaching the aggregator.
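A minimal Python sketch (not Informatica internals) of why sorted input helps: with unsorted input the aggregator must hold every group in memory until the end, whereas with input pre-sorted on the Group By column it can finish each group as soon as the key changes, holding only one group at a time. The department/salary data is invented for illustration.

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {"dept": "HR", "salary": 100},
    {"dept": "HR", "salary": 200},
    {"dept": "IT", "salary": 300},
    {"dept": "IT", "salary": 150},
]

def sorted_input_aggregate(rows, key, value):
    # Sort on the Group By column; in practice this is done upstream,
    # e.g. by an ORDER BY in the Source Qualifier.
    rows = sorted(rows, key=itemgetter(key))
    # groupby only needs one contiguous run of each key in memory at a time,
    # which is exactly what 'Sorted Input' lets the Aggregator exploit.
    return {k: sum(r[value] for r in grp)
            for k, grp in groupby(rows, key=itemgetter(key))}

print(sorted_input_aggregate(rows, "dept", "salary"))  # {'HR': 300, 'IT': 450}
```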
10. How can we update a record in the target table without using Update Strategy?
A target table can be updated without using ‘Update Strategy.’ For this, we need to define the key in the target table at the Informatica level, and then we need to connect the key and the field we want to update in the mapping target. At the session level, we should set the target property to ‘Update as Update’ and check the ‘Update’ check box.
Let us assume we have a target table ‘Customer’ with the fields ‘Customer ID,’ ‘Customer Name,’ and ‘Customer Address.’ If we want to update ‘Customer Address’ without an Update Strategy, then we have to define ‘Customer ID’ as the primary key at the Informatica level, and we will have to connect the ‘Customer ID’ and ‘Customer Address’ fields in the mapping. If the session properties are set correctly as described above, then the mapping will update only the ‘Customer Address’ field for all matching customer IDs.
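A small Python sketch of what this keyed ‘Update as Update’ behavior amounts to; the table, key, and field names mirror the hypothetical ‘Customer’ example and are not real Informatica objects:

```python
# Existing target table keyed on customer_id (hypothetical data).
target = {
    101: {"name": "Asha", "address": "Old Street"},
    102: {"name": "Ravi", "address": "Park Lane"},
}

# Incoming mapping rows carry only the key and the field to update.
incoming = [
    {"customer_id": 101, "address": "New Street"},
    {"customer_id": 999, "address": "Nowhere"},  # no matching key: ignored, not inserted
]

def update_as_update(target, incoming):
    for row in incoming:
        key = row["customer_id"]
        if key in target:                      # update only rows with matching keys
            target[key]["address"] = row["address"]
    return target

update_as_update(target, incoming)
print(target[101]["address"])  # New Street
```

Note that, as with ‘Update as Update,’ a non-matching row updates nothing and inserts nothing.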
11. Why do we use mapping parameters and mapping variables?
Basically, mapping parameters and mapping variables represent values in mappings and mapplets.
- Mapping parameters represent constant values that are defined before running a session.
- After creation, parameters appear in Expression Editor.
- These parameters can be used in source qualifier filters, user-defined joins, or for overriding.
- As opposed to mapping parameters, mapping variables can change values during sessions.
- The last value of a mapping variable is saved to the repository at the end of each successful session by the Integration Service. However, it is possible to override saved values with parameter files.
- Basically, mapping variables are used to perform incremental reads of data sources.
12. Define the surrogate key.
A surrogate key is basically an identifier that uniquely identifies modeled entities or objects in a database. Not being derived from any other data in the database, surrogate keys may or may not be used as primary keys.
A surrogate key is basically a unique sequential number. If an entity exists in the outside world and is modeled within the database, or represents an object within the database, it is denoted by a surrogate key. In these cases, surrogate keys for specific objects or modeled entities are generated internally.
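The idea of an internally generated, sequential surrogate key can be sketched in Python; the class and field names here are illustrative, not part of any Informatica API:

```python
import itertools

class SurrogateKeyGenerator:
    """Hands out unique sequential numbers, independent of the row's own data."""
    def __init__(self, start=1):
        self._counter = itertools.count(start)

    def next_key(self):
        return next(self._counter)

gen = SurrogateKeyGenerator()
customers = [{"natural_key": "ACME-DE"}, {"natural_key": "ACME-US"}]
for row in customers:
    # The surrogate key is assigned internally, not derived from natural_key.
    row["customer_sk"] = gen.next_key()

print([row["customer_sk"] for row in customers])  # [1, 2]
```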
13. Explain sessions and also shed light on how batches are used to combine executions.
A session is nothing but a set of instructions that has to be implemented to move data from a source to a target. To carry out sessions, users need to leverage the Session Manager or use the pmcmd command. For combining sessions, in either a serial or a parallel manner, batch execution is used. Any number of sessions can be grouped into batches for migration.
14. What is incremental aggregation?
Basically, incremental aggregation is the process of capturing changes in the source and calculating aggregations in a session. This process makes the Integration Service update targets incrementally and avoids recalculating aggregations on the entire source.
Upon the first load, the target holds the aggregates computed from the initial data; on each subsequent load, the new data is aggregated together with the stored results of the previous session.
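A compact Python sketch of the idea (the product/quantity data is invented): each new load's aggregates are merged into the stored totals instead of re-aggregating the entire source history.

```python
from collections import Counter

def incremental_aggregate(stored_totals, new_load):
    """stored_totals: aggregates from earlier sessions; new_load: fresh rows only."""
    delta = Counter()
    for row in new_load:
        delta[row["product"]] += row["qty"]
    stored_totals.update(delta)   # merge the delta; history is never recomputed
    return stored_totals

totals = Counter({"pen": 10})                       # result of the first load
totals = incremental_aggregate(totals, [            # next load: only new rows
    {"product": "pen", "qty": 5},
    {"product": "ink", "qty": 2},
])
print(dict(totals))  # {'pen': 15, 'ink': 2}
```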
15. How can we delete duplicate rows from flat files?
We can delete duplicate rows from flat files by leveraging the Sorter transformation and selecting the ‘Distinct’ option. Selecting this option will delete the duplicate rows.
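In Python terms (an illustrative sketch, not Informatica code), the Sorter's ‘Distinct’ option works because sorting places identical rows next to each other, so duplicates can be dropped in one pass:

```python
def sorter_distinct(rows):
    """Sort the rows, then keep only the first of each run of identical rows."""
    deduped = []
    for row in sorted(rows):
        if not deduped or row != deduped[-1]:
            deduped.append(row)
    return deduped

flat_file_rows = ["b,2", "a,1", "b,2", "c,3", "a,1"]
print(sorter_distinct(flat_file_rows))  # ['a,1', 'b,2', 'c,3']
```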
16. What are the features of Informatica Developer 9.1.0?
From an Informatica Developer’s perspective, some of the new features in Informatica Developer 9.1.0 are as follows:
- In the new version, lookup can be configured as an active transformation—it can return multiple rows on a successful match.
- Now, we can write SQL override on uncached lookup also. Previously, we could do it only on cached lookup.
- Control over the size of our session log: In a real-time environment, we can control the session log file size or log file time.
- Database deadlock resilience feature: This will ensure that our session does not immediately fail if it encounters any database deadlock. It will retry the operation. We can configure the number of retry attempts.
17. What are the advantages of using Informatica as an ETL tool over Teradata?
First up, Informatica is a data integration tool, while Teradata is an MPP database with some scripting and fast data movement capabilities.
Advantages of Informatica over Teradata:
- It functions as a metadata repository for the organization’s ETL ecosystem. Informatica jobs (sessions) can be arranged logically into worklets and workflows in folders. It leads to an ecosystem that is easier to maintain and quicker for architects and analysts to analyze and enhance.
- Job monitoring and recovery: It is easy to monitor jobs using Informatica Workflow Monitor. It is also easier to identify and recover failed jobs or slow-running jobs. It exhibits the ability to restart from the failed step.
- Informatica Market Place: It is a one-stop shop for lots of tools and accelerators to make SDLC faster and improve application support.
- It enables plenty of developers in the market with varying skill levels and expertise to interact.
- Lots of connectors to various databases are available, including support for Teradata MLoad, TPump, FastLoad, and Parallel Transporter in addition to the regular (and slow) ODBC drivers.
- Surrogate key generation through shared sequence generators inside Informatica could be faster than generating them inside the database.
- If a company decides to move away from Teradata to another solution, then vendors like Infosys can execute migration projects to move the data and change the ETL code to work with the new database quickly, accurately, and efficiently using automated solutions.
- Pushdown optimization can be used to process the data in the database.
- It has an ability to code ETL such that processing load is balanced between the ETL server and the database box—this is useful if the database box is aging and/or in case the ETL server has a fast disk/large enough memory and CPU to outperform the database in certain tasks.
- It has the ability to publish processes as web services.
Advantages of Teradata over Informatica:
- Cheaper (initially): There are no initial ETL tool license costs, and OPEX costs are lower as one does not need to pay for yearly support from Informatica Corp.
- Great choice if all the data to be loaded are available as structured files—which can then be processed inside the database after an initial stage load.
- Good choice for a lower complexity ecosystem.
- Only Teradata developers or resources with good ANSI/Teradata SQL/BTEQ knowledge are required to build and enhance the system.
18. Differentiate between various types of schemas in data warehousing.
Star Schema
The star schema is the simplest style of data mart schema in data warehousing. It is the approach most widely used to develop data warehouses and dimensional data marts. It features one or more fact tables referencing numerous dimension tables.
Snowflake Schema
A logical arrangement of tables in a multidimensional database, the snowflake schema is represented by centralized fact tables that are connected to multiple dimension tables. Dimension tables in a star schema are normalized by snowflaking: low-cardinality attributes are moved into separate tables, and the resulting structure resembles a snowflake with the fact table in the middle.
Fact Constellation Schema
The fact constellation schema is used in online analytical processing (OLAP); it is a collection of multiple fact tables sharing dimension tables, viewed as a collection of stars. It can be seen as an extension of the star schema.
Following up on these Informatica interview questions for freshers, let us take a look at OLAP and its types. Read on.
19. Define OLAP. What are the different types of OLAP?
OLAP or Online Analytical Processing is a specific class of software that allows users to analyze information from multiple database systems simultaneously. Using OLAP, analysts can extract and have a look at business data from different sources or points of view.
Types of OLAP :
- ROLAP: ROLAP or Relational OLAP is an OLAP server that maps multidimensional operations to standard relational operations.
- MOLAP: MOLAP or Multidimensional OLAP uses array-based multidimensional storage engines for multidimensional views on data. Numerous MOLAP servers use two levels of data storage representation to handle dense and sparse datasets.
- HOLAP: HOLAP or Hybrid OLAP combines both ROLAP and MOLAP for faster computation and higher scalability of data.
20. What is target load order? How to set it?
When a mapplet is used in a mapping, Designer allows users to set the target load order for all sources that pertain to the mapplet. In Designer, users can set the order in which the Integration Service sends rows to targets within the mapping. A target load order group is basically a collection of source qualifiers, transformations, and targets linked together in a mapping. The target load order can be set to maintain referential integrity while operating on tables that have primary and secondary keys.
Steps to Set the Target Load Order
Step 1: Create a mapping that contains multiple target load order groups
Step 2: Click on Mappings and then select Target Load Plan
Step 3: The Target Load Plan dialog box lists all Source Qualifier transformations with targets that receive data from them
Step 4: Select a Source Qualifier and click on the Up and Down buttons to change the position of it
Step 5: Repeat Steps 3 and 4 for other Source Qualifiers if you want to reorder them
Step 6: Click on OK after you are done
Intermediate Interview Questions
21. Define Target Designer.
If we are required to perform ETL operations, we need source data, target tables, and the required transformations. Target Designer in Informatica allows us to create target tables and modify pre-existing target definitions.
Target definitions can be imported from various sources, including flat files, relational databases, XML definitions, Excel worksheets, etc.
To open Target Designer, click on the Tools menu and select the Target Designer option.
22. What are the advantages of Informatica?
The following are the advantages of Informatica:
- It is a graphical user interface tool: coding in any graphical tool is generally faster than hand-coding scripts.
- It can communicate with all known data sources (mainframe/RDBMS/flat files/XML/VSAM/SAP, etc.).
- It can handle very large volumes of data very effectively.
- Users can apply mappings, extraction rules, cleansing rules, transformation rules, aggregation logic, and load rules as separate objects in the ETL tool. Any change in any one of these objects has minimal impact on the others.
- Objects, such as transformation rules, are reusable.
- Informatica has different ‘adapters’ for extracting data from packaged ERP applications (such as SAP or PeopleSoft).
- Skilled resources are readily available in the market.
- It can run on both Windows and UNIX environments.
- Monitoring jobs becomes easy with it, and so does recovering failed jobs and pointing out slow jobs.
- It has many robust features including database information, data validation, migration of projects from one database to another, etc.
23. List some of the PowerCenter client applications with their basic purpose.
- Repository Manager: An administrative tool that is used to manage repository folders, objects, groups, etc.
- Administration Console: Used to perform service tasks
- PowerCenter Designer: Contains several designing tools including source analyzer, target designer, mapplet designer, mapping manager, etc.
- Workflow Manager: Defines a set of instructions that are required to execute mappings
- Workflow Monitor: Monitors workflows and tasks
24. What are sessions? List down their properties.
Available in the Workflow Manager, sessions are configured by creating a session task. Within a mapping program, there can be multiple sessions, which can be either reusable or non-reusable.
Properties of Sessions
- Session tasks can run concurrently or sequentially, as per the requirement.
- They can be configured to analyze performance.
- Sessions include log files, test load, error handling, commit interval, target properties, etc.
25. What are the various types of transformations possible in Informatica?
The various types of transformations are:
- Aggregator Transformation
- Expression Transformation
- Normalizer Transformation
- Rank Transformation
- Filter Transformation
- Joiner Transformation
- Lookup Transformation
- Stored Procedure Transformation
- Sorter Transformation
- Update Strategy Transformation
- XML Source Qualifier Transformation
- Router Transformation
- Sequence Generator Transformation
26. What are the features of connected lookup?
The features of connected lookup are as follows:
- It takes in the input directly from the pipeline.
- It actively participates in the data flow, and both dynamic and static cache is used.
- It caches all lookup columns and returns default values as the output when the lookup condition does not match.
- It is possible to return more than one column value to the output port.
- It supports user-defined default values.
27. Define junk dimensions.
Junk dimensions are structures that consist of a group of a few junk attributes such as random codes or flags. They form a framework to store related codes with respect to a specific dimension at a single place, instead of creating multiple tables for the same.
28. What is the use of Rank Transformation?
An active and connected transformation, Rank transformation is used to sort and rank a set of records either from the top or from the bottom. It is also used to select data with the largest or smallest numeric value based on specific ports.
29. Define Sequence Generator transformation.
A passive and connected transformation, the Sequence Generator transformation is responsible for generating primary keys or a sequence of numbers for calculations or processing. It has two output ports that can be connected to numerous transformations within a mapplet. These ports are:
- NEXTVAL: This can be connected to multiple transformations for generating a unique value for each row or transformation.
- CURRVAL: This port is connected when NEXTVAL is already connected to some other transformation within the mapplet.
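The two ports can be pictured with a simplified Python sketch: NEXTVAL advances the sequence and returns a fresh value, while CURRVAL repeats the value the sequence last produced without advancing it. This is a deliberately simplified model; PowerCenter defines CURRVAL relative to NEXTVAL and the increment value, so consult the product documentation for the exact port semantics.

```python
class SequenceGenerator:
    """Toy model of the two output ports; not the actual PowerCenter behavior."""
    def __init__(self, start=1):
        self._current = start - 1

    def nextval(self):
        self._current += 1        # advance the sequence
        return self._current

    def currval(self):
        return self._current      # read without advancing

seq = SequenceGenerator()
print(seq.nextval())  # 1
print(seq.nextval())  # 2
print(seq.currval())  # 2  (the sequence does not advance)
```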
30. What is the purpose of the INITCAP function?
When invoked, the INITCAP function capitalizes the first character of each word in a string and converts all other characters to lowercase.
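The described behavior can be sketched in plain Python (simplified: words here are whitespace-delimited, whereas INITCAP also treats other non-alphanumeric characters as word boundaries):

```python
def initcap(text):
    # Capitalize the first character of each word; lowercase the rest.
    return " ".join(word.capitalize() for word in text.split(" "))

print(initcap("hELLO informatica WORLD"))  # Hello Informatica World
```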
31. Define enterprise data warehousing.
When an organization's data is developed at a single point of access, it is known as enterprise data warehousing.
32. Differentiate between a database and a data warehouse.
A database contains a group of useful information that is small in size as compared to a data warehouse. A data warehouse contains sets of every kind of data, whether it is useful or not, and the data is extracted as per the requirements of the customer.
33. What do you understand by the term ‘domain’?
The term ‘domain’ refers to all the interlinked relationships and nodes that are managed by a single organizational point.
34. Differentiate between a repository server and a powerhouse.
A repository server mainly guarantees repository reliability and consistency, while a powerhouse server handles the execution of many procedures between the factors of the server's database repository.
35. How can we create indexes after completing the load process?
With the help of the command task at the session level, we can create indexes after the load operation.
36. Define sessions in Informatica ETL.
A session is a set of instructions that transforms data from a source to a target.
37. How many sessions can we have in one group?
We can have any number of sessions, but it is advisable to have a lesser number of sessions in a batch because it will become easier for migration .
38. Differentiate between a mapping parameter and a mapping variable.
The values that change during a session's execution are known as mapping variables, whereas the values that do not change during a session's execution are known as mapping parameters.
39. Mention the advantages of partitioning a session.
The main advantage of partitioning a session is to improve the server's operation and efficiency. Another advantage is that it implements single sequences within the session.
40. What are the features of complex mapping?
The features of complex mapping are as follows:
- There are more numbers of transformations
- It uses complex business logic
41. How can we identify whether a mapping is correct or not without a connecting session?
With the help of the debugging option, we can identify whether a mapping is correct or not without connecting sessions.
42. Can we use mapping parameters or variables, developed in one mapping, into any other reusable transformation?
Yes, we can use mapping parameters or variables in any other reusable transformation because a reusable transformation does not belong to any mapplet.
43. What is the use of the aggregator cache file?
If extra memory is needed, the aggregator provides extra cache files for keeping the transformation values. It also keeps the transitional values found in the local buffer memory.
44. What is lookup transformation?
The transformation that has access rights to an RDBMS is known as lookup transformation.
45. What do you understand by the term ‘role-playing dimension’?
The dimensions that are used for playing diversified roles while remaining in the same database domain are known as role-playing dimensions.
Advanced Interview Questions
46. How can we access repository reports without SQL or other transformations?
We can access repository reports by using a metadata reporter. There is no need of using SQL or other transformations as it is a web app.
47. Mention the types of metadata that are stored in the repository.
The types of metadata stored in the repository are target definitions, source definitions, mapplets, mappings, and transformations.
48. What is code page compatibility?
When data is transferred from one code page to another such that both code pages have the same character sets, data failure will not occur.
49. How can we confirm all mappings in the repository simultaneously?
At a time, we can validate only one mapping. Hence, mappings cannot be validated simultaneously.
50. Define Aggregator transformation.
It is different from Expression transformation, in which we can do row-level calculations; in Aggregator transformation, we can do aggregate calculations on groups of rows, such as averages, sums, etc.
51. What is Expression transformation?
It is used for performing non-aggregated calculations. We can test conditional statements before the output results are moved to the target tables.
52. Define Filter transformation.
Filter transformation is a means of filtering rows in a mapping. It has input/output ports, and only the rows that match the filter condition can pass through the filter.
53. Define Joiner transformation.
It combines two associated heterogeneous sources located in different locations, while a Source Qualifier transformation can combine data emerging from a common source.
54. What do you mean by Lookup transformation?
Lookup transformation is used for looking up data in a relational table through a mapping. We can use multiple Lookup transformations in a mapping.
55. How can we use Union transformation?
It is a multiple input group transformation that is used to combine data from different sources.
56. Define incremental aggregation.
Incremental aggregation is done whenever a session is created for a mapping aggregate.
57. Differentiate between a connected lookup and an unconnected lookup.
In a connected lookup, inputs are taken directly from various transformations in the pipeline. An unconnected lookup does not take inputs directly from other transformations; it can be used in any transformation and can be invoked as a function using the :LKP expression.
58. Define mapplet.
A mapplet is a reusable object that is created using the Mapplet Designer.
59. What is reusable transformation?
A reusable transformation is used numerous times in a mapping. It is stored as metadata and is separate from any mapping that uses the transformation.
60. Define update strategy.
Whenever a row has to be updated or inserted based on some sequence, an Update Strategy is used. In this case, conditions should be specified beforehand for the processed row to be flagged as Update or Insert.
61. Explain the scenario that compels the Informatica server to reject files.
When the Informatica server faces DD_Reject in the Update Strategy transformation, it sends the row to the reject file.
62. What is surrogate key?
A surrogate key is a substitute for the natural primary key. It is a unique identifier for each row in a table.
63. Mention the prerequisite tasks to achieve the session partition.
In order to perform session partition, one needs to configure the session to partition source data and then install the Informatica server machine on a multi-CPU system.
64. In the Informatica server, which files are created during session runs?
The following types of files are created during session runs:
- Errors log
- Bad file
- Workflow log
- Session log
65. Define a session task.
It is a set of instructions that guides the PowerCenter server on how and when to move data from sources to targets.
66. Define the command task.
This task permits one or more shell commands in UNIX, or DOS commands in Windows, to run during the workflow.
67. Explain standalone command task.
This task can be used anywhere in the workflow to run shell commands.
68. What is a predefined event?
A predefined event is a file-watch event. It waits for a specific file to arrive at a specific location .
69. What is a user-defined event?
User-defined events are a flow of tasks in the workflow. Events can be created and then raised as per requirement.
70. Define workflow.
The group of instructions that communicates with the server about how to implement tasks is known as a workflow.
71. Mention the different tools used in Workflow Manager.
The different tools used in Workflow Manager are:
- Task Developer
- Worklet Designer
- Workflow Designer
72. Name the other tools used for scheduling purpose other than Workflow Manager and pmcmd.
‘CONTROL-M’ is a third-party tool used for scheduling purposes.
73. Define OLAP (Online Analytical Processing).
It is a process by which multidimensional analysis occurs.
74. Name the different types of OLAP.
Different types of OLAP are ROLAP, MOLAP, HOLAP, and DOLAP.
75. Define worklet.
A worklet is a group of workflow tasks. It includes a timer, decision, command, event wait, etc.
76. Mention the use of a Target Designer.
With the help of a Target Designer, we can create a target definition.
77. From where can we find the throughput option in Informatica?
In Workflow Monitor, we can find the throughput option. Right-click on the session, select Get Run Properties, and under Source/Target Statistics we can find this option.
78. Define target load order.
Target load order is specified based on the source qualifiers in a mapping. If there are multiple source qualifiers attached to various targets, we can designate the order in which Informatica loads data into the targets.
79. Define Informatica.
Informatica is a tool that supports all the steps of the Extraction, Transformation, and Load (ETL) process. Nowadays, Informatica is also being used as an integration tool. It is easy to use and has a simple visual interface, like forms in Visual Basic. You just need to drag and drop different objects (known as transformations) and design the process flow for data extraction, transformation, and load.
These process flow diagrams are known as mappings. Once a mapping is made, it can be scheduled to run as and when required. In the background, the Informatica server takes care of fetching data from the source, transforming it, and loading it into the target.
80. What are the different lookup cache(s)?
Informatica lookups can be cached or uncached (no cache). A cached lookup can be either static or dynamic. A static cache is one that does not modify the cache once it is built, and it remains the same during the session run. On the other hand, a dynamic cache is refreshed during the session run by inserting or updating the records in the cache based on the incoming source data.
By default, the Informatica cache is a static cache. A lookup cache can also be classified as persistent or non-persistent based on whether Informatica retains the cache even after the completion of the session run or deletes it.
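The static/dynamic distinction can be sketched in Python (illustrative only; the function names and data are made up): a static cache is a one-time snapshot that never changes during the run, while a dynamic cache learns new rows as they arrive, so later source rows can see earlier ones.

```python
def build_cache(lookup_table):
    # One-time snapshot of the lookup source, built before rows flow.
    return dict(lookup_table)

def lookup_static(cache, key):
    return cache.get(key)              # the cache is never modified

def lookup_dynamic(cache, key, row):
    if key not in cache:
        cache[key] = row               # new source row inserted into the cache
    return cache[key]

cache = build_cache({"C1": "existing"})
print(lookup_static(cache, "C2"))          # None: a static miss stays a miss
print(lookup_dynamic(cache, "C2", "new"))  # new: the dynamic cache learns the row
print(lookup_static(cache, "C2"))          # new: later rows now find it
```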
81. How can we update a record in the target table without using Update Strategy?
A target table can be updated without using ‘Update Strategy.’ For this, we need to define the key in the target table at the Informatica level, and then we need to connect the key and the field we want to update in the mapping target. At the session level, we should set the target property to ‘Update as Update’ and check the ‘Update’ check box. Let us assume we have a target table ‘Customer’ with the fields ‘Customer ID,’ ‘Customer Name,’ and ‘Customer Address.’
If we want to update ‘Customer Address’ without an Update Strategy, then we have to define ‘Customer ID’ as the primary key at the Informatica level and connect the Customer ID and Customer Address fields in the mapping. If the session properties are set correctly as described above, then the mapping will update only the Customer Address field for all matching customer IDs.
82. What are the new features of Informatica 9.x Developer?
From an Informatica Developer’s perspective, some of the new features in Informatica 9.x are as follows:
- Lookup can be configured as an active transformation—it can return multiple rows on a successful match.
- You can write SQL override on uncached lookups also. Previously, you could do it only on cached lookups.
- You can control the size of the session log. In a real-time environment, you can control the session log file size or time.
- Database deadlock resilience feature—this will ensure that the session does not immediately fail if it encounters any database deadlock; it will now retry the operation again. You can configure the number of retry attempts.
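The deadlock resilience item above can be sketched as a generic retry loop in Python; `DeadlockError` and the retry counts here are hypothetical stand-ins for illustration, not Informatica internals:

```python
class DeadlockError(Exception):
    """Stand-in for a database deadlock reported to the session."""
    pass

def run_with_retries(operation, max_retries=3):
    # Retry the operation up to a configurable number of attempts
    # instead of failing immediately on the first deadlock.
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except DeadlockError:
            if attempt == max_retries:
                raise              # give up only after the configured retries

attempts = {"count": 0}
def flaky_write():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise DeadlockError("database deadlock")
    return "committed"

print(run_with_retries(flaky_write))  # committed
print(attempts["count"])              # 3
```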
83. What is Informatica ETL Tool?
The Informatica ETL tool is the market leader in data integration and data quality services. Informatica is a successful ETL and EAI tool with significant industry coverage. ETL refers to extraction, transformation, and loading. Data integration tools are different from other software platforms and languages.
They have no built-in feature to build a user interface where the end user can see the transformed data. The Informatica ETL tool PowerCenter has the capability to manage, integrate, and migrate enterprise data.
84. What is the need for an ETL tool?
The trouble with traditional programming languages is that we need to connect to multiple sources and then handle errors, for which we have to write complex code. ETL tools provide a ready-made solution for this. We do not need to worry about handling these things and can hence concentrate on coding only the required part.