AZURE DATA FACTORY INTERVIEW QUESTIONS: 11-20

 11. WHY DO WE USE AZURE DATA FACTORY?

   *) ADF addresses the challenges faced while moving data to or from the cloud

     in the following ways:

   a) Job scheduling and orchestration ---------- (pipelines are scheduled automatically using triggers)

    b) Security                         ---------- (data is transferred securely between on-premises systems and the cloud)

    c) Continuous integration and delivery ------- (ADF integration with GitHub lets you develop, build, and deploy)

    d) Scalability                      ---------- (it is capable of handling large volumes of data)

12. WHAT ARE THE STEPS FOR CREATING AN ETL PROCESS IN AZURE DATA FACTORY?

       1. Create a linked service for the source data store, for example a SQL Server database.

       2. Create a linked service for the destination data store, for example Azure Data Lake Storage.

       3. Create datasets for the source and destination, then create a pipeline and add a Copy data activity.

       4. Schedule the pipeline by adding a trigger.
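
       A minimal sketch of these steps with the azure-mgmt-datafactory Python SDK is shown below. The subscription ID, resource group, factory, linked service, dataset, and pipeline names, and the connection strings are all placeholders, and the datasets are assumed to exist already:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        AzureBlobFSLinkedService, AzureSqlDatabaseLinkedService, AzureSqlSource,
        CopyActivity, DatasetReference, LinkedServiceResource, ParquetSink,
        PipelineResource, SecureString,
    )

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
    rg, df = "my-rg", "my-data-factory"  # placeholder resource group / factory

    # 1. Linked service for the source data store (an Azure SQL database here).
    client.linked_services.create_or_update(rg, df, "SqlSourceLS", LinkedServiceResource(
        properties=AzureSqlDatabaseLinkedService(
            connection_string=SecureString(value="<sql-connection-string>"))))

    # 2. Linked service for the destination data store (ADLS Gen2).
    client.linked_services.create_or_update(rg, df, "AdlsSinkLS", LinkedServiceResource(
        properties=AzureBlobFSLinkedService(
            url="https://<account>.dfs.core.windows.net",
            account_key="<storage-account-key>")))

    # 3. Pipeline with a Copy data activity. The source and sink datasets
    #    ("SqlOrdersDS", "AdlsOrdersDS") are assumed to have been created with
    #    client.datasets.create_or_update against the linked services above.
    copy = CopyActivity(
        name="CopyOrders",
        inputs=[DatasetReference(reference_name="SqlOrdersDS", type="DatasetReference")],
        outputs=[DatasetReference(reference_name="AdlsOrdersDS", type="DatasetReference")],
        source=AzureSqlSource(),
        sink=ParquetSink())  # match the sink type to the sink dataset's format
    client.pipelines.create_or_update(rg, df, "CopyOrdersPipeline",
                                      PipelineResource(activities=[copy]))

    # 4. Scheduling with a trigger is shown under questions 14 and 17 below.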

13. WHAT ARE THE DEPLOYMENT ENVIRONMENTS OFFERED BY AZURE?

       Two Environments:-

        1. STAGING ENVIRONMENT.

                 It provides a platform to validate changes to your application before they are made live in the production environment.

       2. PRODUCTION ENVIRONMENT.

                 This environment is used to host the live application.

14. WHAT IS THE RELATIONSHIP BETWEEN A TRIGGER AND A PIPELINE?

         One trigger can be applied to multiple pipelines, and multiple triggers can be used on a single pipeline.
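
         For example, with the same Python SDK as above, a single schedule trigger can list several pipelines in its pipelines property; the pipeline and trigger names below are placeholders:

    from datetime import datetime, timezone
    from azure.mgmt.datafactory.models import (
        PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
        TriggerPipelineReference, TriggerResource,
    )

    # One trigger whose 'pipelines' list fires two different pipelines on the
    # same daily schedule (client, rg, df as in the sketch under question 12).
    trigger = TriggerResource(properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Day", interval=1,
            start_time=datetime(2024, 1, 1, tzinfo=timezone.utc), time_zone="UTC"),
        pipelines=[
            TriggerPipelineReference(pipeline_reference=PipelineReference(
                reference_name="CopyOrdersPipeline", type="PipelineReference")),
            TriggerPipelineReference(pipeline_reference=PipelineReference(
                reference_name="LoadCustomersPipeline", type="PipelineReference")),
        ]))
    client.triggers.create_or_update(rg, df, "DailyTrigger", trigger)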

15. WHAT IS THE DIFFERENCE BETWEEN AN INCREMENTAL LOAD AND A FULL LOAD?

          INCREMENTAL LOAD:-

                      An incremental load is an approach where only the new or changed data is loaded from the source system to the destination system.

          FULL LOAD:-

                     A Full load is an approach where all the data from the source system is loaded into the destination system.
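
          A common way to implement an incremental load is a watermark column. The sketch below is illustrative only; the table, the last_modified_at column, and the watermark value are hypothetical:

    # Full load: every run copies the entire source table.
    FULL_LOAD_QUERY = "SELECT * FROM dbo.orders"

    # Incremental load: each run copies only rows changed since the watermark,
    # i.e. the highest last_modified_at value copied by the previous run
    # (typically persisted in a small control table).
    def incremental_query(last_watermark: str) -> str:
        return ("SELECT * FROM dbo.orders "
                f"WHERE last_modified_at > '{last_watermark}'")

    print(incremental_query("2024-01-01T00:00:00"))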

16. WHAT ARE THE FILE FORMATS SUPPORTED BY AZURE DATA FACTORY?

      Azure Data Factory supports the following file formats:

        1) Avro format

        2) Binary format

       3) Delimited text format

       4) Excel format

       5) JSON format

       6) ORC format

       7) Parquet format

       8) XML format

17. HOW TO RUN YOUR PIPELINE DAILY?

        The pipeline can be run every day by attaching a schedule trigger with a daily recurrence.
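
        Assuming a daily schedule trigger like the one sketched under question 14, it must also be started before it begins firing:

    # Trigger start is a long-running operation in recent azure-mgmt-datafactory
    # versions (older versions expose client.triggers.start instead).
    client.triggers.begin_start(rg, df, "DailyTrigger").result()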

18. HOW TO SEE PIPELINE ERRORS?

         1. Monitor and Manage tab.

         2. Error logs.

         3. Logic Apps.

         4. REST APIs.
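
         Beyond the Monitor tab, failed runs can also be pulled programmatically. A sketch using the same Python client as above (names remain placeholders):

    from datetime import datetime, timedelta, timezone
    from azure.mgmt.datafactory.models import RunFilterParameters, RunQueryFilter

    now = datetime.now(timezone.utc)
    failed = client.pipeline_runs.query_by_factory(rg, df, RunFilterParameters(
        last_updated_after=now - timedelta(days=1),
        last_updated_before=now,
        filters=[RunQueryFilter(operand="Status", operator="Equals",
                                values=["Failed"])]))

    for run in failed.value:
        # For a failed run, run.message holds the error text.
        print(run.pipeline_name, run.run_id, run.message)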

19. COMPLEXITIES FACED WHILE MIGRATING DATA FROM SOURCE TO DESTINATION?

      1. Checking whether all the files were received or not.

      2. Checking that all the files are in the same format.

      3. Data type and data size issues.

      4. Checking the column names and the number of columns in each file.

      5. Comparing all the files with the summary file.

      6. Delimiter issues.
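
      Several of these checks can be automated. A minimal sketch using Python's csv module, where the expected columns, delimiter, and landing folder are hypothetical:

    import csv
    from pathlib import Path

    EXPECTED_COLUMNS = ["order_id", "customer_id", "amount"]  # hypothetical schema
    DELIMITER = ","                                           # expected delimiter

    def validate_file(path: Path) -> list:
        """Return a list of problems found in one received file."""
        problems = []
        with path.open(newline="") as f:
            header = next(csv.reader(f, delimiter=DELIMITER), None)
            if header is None:
                problems.append("file is empty")
            elif header != EXPECTED_COLUMNS:
                problems.append(f"unexpected columns: {header}")
        return problems

    # Check every file that landed in the (hypothetical) staging folder.
    for path in Path("landing").glob("*.csv"):
        for problem in validate_file(path):
            print(f"{path.name}: {problem}")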

20. IN ADF, WHERE ARE THE PASSWORDS STORED?

       Azure Key Vault. Linked services can reference Key Vault secrets so that passwords are never stored directly in ADF.
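
       A stored secret can also be read directly with the azure-keyvault-secrets SDK, as in this sketch (the vault URL and secret name are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Connect to the vault that holds the connection passwords.
    vault = SecretClient(vault_url="https://<vault-name>.vault.azure.net",
                         credential=DefaultAzureCredential())

    # Retrieve a stored password by its secret name.
    secret = vault.get_secret("sql-admin-password")
    print(secret.name, "retrieved")  # avoid printing secret.value in real code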


         


                





