TOP [25+] SQL Query Interview Questions & Answers | Learn NOW
SQL Query Interview Questions and Answers


Last updated on 04th Jul 2020, Blog, Interview Questions

About author

Sanjay (Sr Data Base Manager )

A high-level domain expert in top MNCs with 8+ years of experience. He has handled around 20+ projects and shares his knowledge by writing these blogs for us.


These SQL Query interview questions have been designed to acquaint you with the nature of the questions you may encounter during an interview on SQL Query. In my experience, good interviewers hardly plan to ask any particular question during an interview; normally questions start with some basic concept of the subject and continue based on further discussion and your answers. We cover scenario-based SQL Query interview questions, SQL Query interview questions for freshers, and SQL Query interview questions and answers for experienced candidates.

1) What systems can SQL extract and load data into?

Ans:

ODI brings true heterogeneous connectivity out of the box: it can connect natively to Oracle, Sybase, MS SQL Server, MySQL, LDAP, DB2, PostgreSQL and Netezza.

It can also connect to any data source supporting JDBC; it is even possible to use the Oracle BI Server as a data source, using the JDBC driver that ships with BI Publisher.

2) What are Knowledge Modules?

Ans:

Knowledge Modules are the ‘plug-ins’ that allow ODI to generate the relevant execution code, across technologies, to perform tasks in one of six areas. The six types of knowledge module are:

  • Reverse-engineering knowledge modules are used for reading table and other object metadata from source databases
  • Journalizing knowledge modules record the new and changed data within either a single table or view or a consistent set of tables or views
  • Loading knowledge modules are used for efficient extraction of data from source databases for loading into a staging area (database-specific bulk unload utilities can be used where available)
  • Check knowledge modules are used for detecting errors in source data
  • Integration knowledge modules are used for efficiently transforming data from the staging area to the target tables, generating the optimized native SQL for the given database
  • Service knowledge modules provide the ability to expose data as web services

3) Which knowledge modules are used for detecting errors in source data?

Ans:

Check knowledge modules: as part of flow or static control, they validate data against the constraints defined in ODI.

4) Does my SQL infrastructure require an Oracle database?

Ans:

No. The ODI modular repositories (one Master and one or more Work repositories) can be installed on any database engine that supports ANSI ISO 89 syntax, such as Oracle, Microsoft SQL Server, Sybase ASE, IBM DB2 UDB and IBM DB2/400.

5) Does ODI support web services?

Ans:

Yes, ODI is ‘SOA’ enabled and its web services can be used in 3 ways:

  • The Oracle Data Integrator Public Web Service, that lets you execute a scenario (a published package) from a web service call
  • Data Services, which provide a web service over an ODI data store (i.e. a table, view or other data source registered in ODI)
  • The ODI Invoke Web Service tool that you can add to a package to request a response from a web service.

6) What is the ODI Console?

Ans:

  • The ODI Console is a web-based navigator for accessing the Designer, Operator and Topology components through a browser.

Suppose I have 6 interfaces and, while running them, the 3rd one fails; how do I run the remaining interfaces?

  • If you are running a sequential load, the failure stops the remaining interfaces: go to Operator, right-click the failed interface and click Restart. If you are running all the interfaces in parallel, only the failed interface stops and the others finish.

7) What are load plans and the types of load plans?

Ans:

A load plan is a process to run or execute multiple scenarios sequentially, in parallel, or conditionally. Accordingly, there are three types of load plans: sequential, parallel and condition-based.

8) What is a profile in SQL?

Ans:

A profile is a set of object-wise privileges. We can assign these profiles to users, and the users get their privileges from the profile.

9) How do you write sub-queries in SQL?

Ans:

Using the Yellow Interface and the sub-queries option we can create sub-queries, or we can use a VIEW, or we can call database queries directly using a SQL procedure.
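Independently of the tooling, the underlying SQL sub-query pattern can be illustrated with Python's built-in sqlite3 module; the tables and data below are hypothetical, purely for illustration:

```python
import sqlite3

# Hypothetical tables for illustration: customers and their orders.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 50.0), (2, 1, 75.0), (3, 2, 20.0)])

# Correlated sub-query in the WHERE clause:
# customers whose total order amount exceeds 60.
rows = cur.execute(
    """
    SELECT name FROM customers c
    WHERE (SELECT SUM(amount) FROM orders o
           WHERE o.customer_id = c.id) > 60
    """
).fetchall()
print(rows)  # [('Asha',)]
```

The same sub-query text is what a view or procedure would wrap in the approaches mentioned above.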

10) How do you remove duplicates in SQL?

Ans:

Use DISTINCT at the IKM level; it removes the duplicate rows while loading into the target.
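At the SQL level, the DISTINCT option behaves like this sqlite3 sketch (the staging and target table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE staging (city TEXT)")
cur.executemany("INSERT INTO staging VALUES (?)",
                [("Pune",), ("Pune",), ("Delhi",)])

# DISTINCT collapses duplicate rows while loading into the target,
# mirroring the "DISTINCT rows" option at the IKM level.
cur.execute("CREATE TABLE target (city TEXT)")
cur.execute("INSERT INTO target SELECT DISTINCT city FROM staging")
rows = sorted(cur.execute("SELECT city FROM target").fetchall())
print(rows)  # [('Delhi',), ('Pune',)]
```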

11) Suppose a source has both unique and duplicate records, and I want to load the unique records into one table and the duplicates into another?

Ans:

Create two interfaces, or one procedure with two queries: one for the unique values and one for the duplicate values.
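The two queries could, for example, use GROUP BY … HAVING to separate unique values from duplicates; this sqlite3 sketch uses hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (val TEXT)")
cur.executemany("INSERT INTO src VALUES (?)", [("a",), ("b",), ("b",), ("c",)])
cur.execute("CREATE TABLE uniq (val TEXT)")
cur.execute("CREATE TABLE dup (val TEXT)")

# One query per target table:
# HAVING COUNT(*) = 1 selects unique values, > 1 selects duplicates.
cur.execute("INSERT INTO uniq SELECT val FROM src GROUP BY val HAVING COUNT(*) = 1")
cur.execute("INSERT INTO dup  SELECT val FROM src GROUP BY val HAVING COUNT(*) > 1")
uniq = sorted(cur.execute("SELECT val FROM uniq").fetchall())
dup = cur.execute("SELECT val FROM dup").fetchall()
```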

12) How do you implement data validations?

Ans:

Use filters and the mapping area; for data-quality validations based on constraints, use CKM flow control.

13) How do you handle exceptions?

Ans:

In packages, exceptions are handled on the Advanced tab; in load plans, on the Exception tab.

14) If one interface in a package fails, how do you know which one failed if you have no access to Operator?

Ans:

Set up a mail alert, or check the SNP_SESS_LOG table for the session log details.

15) How do you implement logic in procedures so that data deleted on the source side is also deleted on the target side?

Ans:

  • Use this query in the command on target: Delete from Target_table where not exists (Select ‘X’ From Source_table Where Source_table.ID=Target_table.ID).
  • If the source has 15 records in total, of which 2 are updated and 3 are newly inserted, then on the target side we have to load the changed and newly inserted records.
  • Use the IKM Incremental Update knowledge module for both insert and update operations.
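The delete pattern from the first bullet can be exercised end to end with sqlite3; the table and column names follow the query above, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source_table (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE target_table (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO source_table VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO target_table VALUES (?)", [(1,), (2,), (3,)])

# Same pattern as the procedure's "command on target": rows deleted at
# the source (here id 3) are removed from the target.
cur.execute(
    """
    DELETE FROM target_table
    WHERE NOT EXISTS (SELECT 'X' FROM source_table
                      WHERE source_table.id = target_table.id)
    """
)
remaining = cur.execute("SELECT id FROM target_table ORDER BY id").fetchall()
print(remaining)  # [(1,), (2,)]
```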

16) Can we implement package in package?

Ans:

Yes, we can call one package into other package.

17) How do you load data from one flat file and one RDBMS table using joins?

Ans:

  • Drag and drop both the file and the table into the source area and join them in the staging area.
  • If the source and target are Oracle technology, the process to achieve this requirement (interfaces, KMs, models) is:
  • Use LKM SQL to SQL or LKM SQL to Oracle, with IKM Oracle Incremental Update or IKM Control Append.

18) What do we specify in the XML data server as parameters to connect to an XML file?

Ans:

The file name with its location (F) and the schema (S): these two parameters.

19) How to reverse engineer views(how to load the data from views)?

Ans:

  • In Models, go to the Reverse Engineering tab and select VIEW as the reverse engineering object.

20) What is BEx Map in SAP BI?

Ans:

BEx Map is BW’s Geographical Information System (GIS). BEx Map is one of the features of SAP BI; it presents geographical information such as customer, customer sales region and country.


    21) What is the t-code to see log of transport connection?

    Ans:

    In RSA1 → Transport Connection you can collect the queries and the roles, and after this you can transport them (enabling the transport in SE10 and importing it in STMS):

    • RSA1 → Transport Connection (button on the left bar menu)
    • SAP Transport → Object Types (button on the left bar menu)
    • Find Query Elements → Query
    • Find your query
    • Group the necessary objects
    • Transport the objects (car icon)
    • Release the transport (SE10 T-code)
    • Load the transport (STMS T-code)

    What is the significance of the marker for the LO/MM inventory data sources?

    • The marker acts as a checkpoint when you upload data from the inventory data sources.
    • 2LIS_03_BX is the data source for the current stock, and 2LIS_03_BF for movement types.
    • After uploading data from BX, you should release the request in the cube (that is, compress it), then load data from the other data source, BF, with the update set to “no marker update”. The marker is used as a checkpoint; if you skip this step, you get data mismatches at the BEx level because the system gets confused.
    • 2LIS_03_BF Goods Movement from Inventory Management: uncheck the “no marker update” checkbox.
    • 2LIS_03_BX Stock Initialization for Inventory Management: select the “no marker update” checkbox.
    • 2LIS_03_UM Revaluations: uncheck the “no marker update” checkbox (in the InfoPackage for the collapse).

    22) How can you navigate to see the error idocs?

    Ans:

    • Check the IDocs in the source system: go to BD87, enter your user ID and date, and execute; you can find the IDocs with red status. Select the erroneous IDoc, right-click and select Manual process.
    • You need to reprocess these red IDocs. You can take the help of your team (ALE IDoc team or Basis team), or else
    • you can push them through manually: just search in the BD87 screen and reprocess from there.
    • Also, try to find out why these IDocs got stuck there.

    23) Difference between v1, v2, v3 jobs in extraction?

    Ans:

    • V1 Update: whenever we create a transaction in R/3 (e.g. a sales order), the entries go into the R/3 tables (VBAK, VBAP, …), and this takes place in the V1 update.
    • V2 Update: the V2 update starts a few seconds after the V1 update; in this update, the values go into the statistical tables, from which we do the extraction into BW.
    • V3 Update: it is purely for BW extraction.

    24) What are statistical update and document update?

    Ans:

    • Synchronous updating (V1 update): the statistics update is made synchronously with the document update.
    • If problems occur while updating that result in the termination of the statistics update, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; subsequently, the documents can be entered again.
    • Radio button: V2 updating.

    25)Do you have any idea how to improve the performance of the BW?

    Ans:

    • Asynchronous updating (V2 update): with this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (see V1 update).
    • Radio button: updating in the V3 update program.
    • Asynchronous updating (V3 update): with this update type, updating is made separately from the document update. The difference between this update type and the V2 update lies, however, in the time schedule: if the V3 update is active, the update can be executed at a later time.
    • In contrast to V1 and V2 updates, no single documents are updated. The V3 update is therefore also described as a collective update.

    26) How can you decide the query performance is slow or fast?

    Ans:

    • You can check that with the RSRT T-code.
    • Execute the query in RSRT and then follow the steps below:
    • Go to SE16 and, in the resulting screen, give the table name RSDDSTAT (for BW 3.x) or RSDDSTAT_DM (for BI 7.0) and press Enter; you can view all the details about the query, such as the time taken to execute it and the timestamps.

    27) What is statistical setup and what is the need and why?

    Ans:

    Follow these steps to fill the setup table:

    • Go to transaction code RSA3 and see if any data is available related to your DataSource. If data is there in RSA3 then go to transaction code LBWG (Delete Setup data) and delete the data by entering the application name.
    • Go to transaction SBIW –>Settings for Application Specific Datasource –>Logistics –>Managing extract structures –>Initialization –>Filling the Setup table –>Application specific setup of statistical data –>perform setup (relevant application)
    • In OLI*** (for example OLI7BW for Statistical setup for old documents: Orders) give the name of the run and execute. Now all the available records from R/3 will be loaded to setup tables.
    • Go to transaction RSA3 and check the data.
    • Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.
    • Go to the BW system, create an InfoPackage and, under the Update tab, select the Initialize Delta Process option, then schedule the package. All the data available in the setup tables is now loaded into the data target.
    • Now, for the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct/Queued delta. By doing this, records bypass SM13 and go directly to RSA7. Go to transaction code RSA7; there you can see a green light, and once new records are added you can immediately see them in RSA7.
    • Go to the BW system and create a new InfoPackage for delta loads. Double-click on the new InfoPackage; under the Update tab you can see the Delta Update radio button.

    • Now you can go to your data target and see the delta records.

    28) Why do we have to construct setup tables?

    Ans:

    • The R/3 database structure for accounting is much simpler than the logistics structure.
    • Once you post in a ledger, that is done; you can correct it, but that just gives another posting.
    • BI can get information directly out of this (relatively) simple database structure.
    • In LO, you can have an order with multiple deliveries to more than one delivery address, and the payer can also be different.
    • When one item (order line) changes, this can be reflected in the order, supply, delivery, invoice, etc.
    • Therefore a special record structure is built for logistics reports, and this structure is now used for BI.
    • In order to have this special structure filled with your starting position, you must run a setup. From that moment on, R/3 will keep filling this LO database. If you didn’t run the setup, BI would only have data from the moment you started filling LO (with the logistics cockpit).

    29) How can you eliminate the duplicate records in TD, MD?

    Ans:

    Try to check the system logs through SM21 for the same.

    30) Explain the architecture of SAP BW system and its components

    Ans:

    • OLAP Processor
    • Metadata Repository,
    • Process designer and other functions.
    • Business Explorer (BEx) is the reporting and analysis tool that supports query, analysis and reporting functions in BI. Using BEx, you can analyze historical and current data at different degrees of detail.

    31) What is an Info Object and why it is used in SAP BI?

    Ans:

    • InfoObjects are the smallest units in SAP BI and are used in InfoProviders, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.
    • InfoObjects are used in reports to analyze the data stored and to provide information to decision makers.

    32) What are the different categories of InfoObjects in BW system?

    Ans:

    Info Objects can be categorized into below categories −

    • Characteristics like Customer, Product, etc.
    • Units like Quantity sold, currency, etc.
    • Key Figures like Total Revenue, Profit, etc.
    • Time characteristics like Year, quarter, etc.

    33) What is the use of Info area in SAP BW system?

    Ans:

    Info Areas in SAP BI are used to group similar types of objects together. Info Areas are used to manage InfoCubes and InfoObjects. Each InfoObject resides in an Info Area, which you can think of as a folder that holds similar files together.

    34) How do you access to source system data in BI without extraction?

    Ans:

    You can directly access source system data in BI without extraction by using VirtualProviders. VirtualProviders are InfoProviders in which no transactional data is stored; they allow only read access to BI data.

    35) What are different types on Virtual providers?

    Ans:

    • VirtualProviders based on DTP
    • VirtualProviders with function modules
    • VirtualProviders based on BAPIs

    36) Which Virtual Providers are used in which scenario of data extraction?

    Ans:

    VirtualProviders based on DTP: these VirtualProviders are based on a DataSource or an InfoProvider and take on the characteristics and key figures of the source. The same extractors are used to select data in the source system as are used to replicate data into the BI system.


    37) When should VirtualProviders based on DTP be used?

    Ans:

    When only a small amount of data is used, you need up-to-date data from an SAP source system, and only a few users execute queries simultaneously on the database.

    38) What is a VirtualProvider with a function module?

    Ans:

    This VirtualProvider is used to display data from a non-BI data source in BI without copying the data into BI structures. The data can be local or remote. It is used primarily for SEM applications.

    39) What is the use of Transformation and how the mapping is done in BW?

    Ans:

    The transformation process is used to perform data consolidation, cleansing and data integration. When data is loaded from one BI object into another, a transformation is applied to it; a transformation converts fields of the source into the target object format.

    Transformation rules: transformation rules are used to map source fields to target fields. Different rule types can be used in a transformation.
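The field-level mapping that transformation rules perform can be sketched outside BW; the rule table, field names and converters below are hypothetical, purely to illustrate mapping a source record into a target format:

```python
# Hypothetical rule table: target field -> (source field, converter).
rules = {
    "REVENUE": ("amount", float),
    "COUNTRY": ("ctry", str.upper),
}

def transform(source_row, rules):
    """Apply transformation rules to map a source record to the target format."""
    return {tgt: conv(source_row[src]) for tgt, (src, conv) in rules.items()}

out = transform({"amount": "12.5", "ctry": "de"}, rules)
print(out)  # {'REVENUE': 12.5, 'COUNTRY': 'DE'}
```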

    40)How do perform real time data acquisition in BW system?

    Ans:

    • Real-time data acquisition is based on moving data into the Business Warehouse in real time: data is sent to the delta queue or PSA table in real time.
    • Real-time data acquisition can be achieved in two scenarios: by using an InfoPackage for real-time data acquisition via the Service API, or
    • by using a web service to load data into the Persistent Staging Area (PSA) and then a real-time DTP to move the data into a DSO.
    • Real-time data acquisition background process: to process data into the InfoPackage and the data transfer process (DTP) at regular intervals, you can use a background process known as a daemon.
    • The daemon process gets all the information from the InfoPackage and DTP about which data is to be transferred and which PSA and DataStore objects are to be loaded with data.

    41) What is Info Object catalog?

    Ans:

    InfoObjects are created in an InfoObject catalog. An InfoObject can be assigned to more than one InfoObject catalog.

    42) What is the use DSO in BW system? What kind of data is stored in DSOs? What are the different components in DSO architecture?

    Ans:

    To access data for reporting and analysis immediately after it is loaded, a DataStore object for direct update is used.

    43) What all data sources you have used to acquire data in SAP BW system?

    Ans:

    • SAP systems (SAP Applications/SAP ECC)
    • Relational Database (Oracle, SQL Server, etc.)
    • Flat File (Excel, Notepad)
    • Multidimensional Source systems (Universe using UDI connector)
    • Web Services that transfer data to BI by means of push

    44) When you are using SAP BI 7.x, into which component can you load the data?

    Ans:

    In BW 3.5 you can load data into the Persistence Staging Area and also into targets from the source system, but if you are using SAP BI 7.0 the data load should be restricted to the PSA only.

    45) What is an InfoPackage?

    Ans:

    • An InfoPackage is used to specify how and when to load data into the BI system from different data sources. An InfoPackage contains all the information about how data is loaded from the source system into a DataSource or the PSA, and it holds the conditions for requesting data from a source system.
    • Note that with an InfoPackage in BW 3.5 you can load data into the Persistence Staging Area and also into targets from the source system, but if you are using SAP BI 7.0 the data load should be restricted to the PSA only.

    46) What is extended Star schema? Which of the tables are inside and outside cube in an extended star schema?

    Ans:

    In the extended star schema, fact tables are connected to dimension tables, dimension tables are connected to SID tables, and SID tables are connected to the master data tables. The fact and dimension tables are inside the cube, while the SID tables are outside it. When you load transactional data into an InfoCube, DIM IDs are generated based on the SIDs, and these DIM IDs are used in the fact table.

    47) How extended Star schema is different from Star schema?

    Ans:

    • In the extended star schema, one fact table can connect to 16 dimension tables, and each dimension table can be assigned a maximum of 248 SID tables. The SID tables link to the characteristics, and each characteristic can have master data tables such as ATTR, TEXT, etc.
    • In a classic star schema, each dimension is joined to one single fact table. Each dimension is represented by only one dimension table, which is not further normalized.
    • A dimension table contains the set of attributes that are used to analyze the data.

    48) Which DataStore object is used for immediate access to loaded data?

    Ans:

    The DataStore object for direct update allows you to access data for reporting and analysis immediately after it is loaded. It differs from standard DSOs in how it processes data: data is stored in the same format in which it was loaded into the DataStore object for direct update by the application.

    49) Explain the structure of direct update DSOs?

    Ans:

    The structure has one table for active data; no change log area exists. Data is retrieved from external systems using the following APIs:

    • RSDRI_ODSO_INSERT: used to insert new data.
    • RSDRI_ODSO_INSERT_RFC: Similar to RSDRI_ODSO_INSERT and can be called up remotely.
    • RSDRI_ODSO_MODIFY: This is used to insert data having new keys. For data with keys already in the system, the data is changed.
    • RSDRI_ODSO_MODIFY_RFC: Similar to RSDRI_ODSO_MODIFY and can be called up remotely.
    • RSDRI_ODSO_UPDATE: This API is used to update existing data.
    • RSDRI_ODSO_UPDATE_RFC: This is similar to RSDRI_ODSO_UPDATE and can be called up remotely.
    • RSDRI_ODSO_DELETE_RFC: This API is used to delete the data.

    50)Can we perform Delta uploads in direct update DSOs?

    Ans:

    As the structure of this DSO contains one table for active data and no change log, it does not allow delta updates to InfoProviders.

    51) What is write optimized DSOs?

    Ans:

    In Write optimized DSO, data that is loaded is available immediately for the further processing.

    52) Where do we use Write optimized DSOs?

    Ans:

    • Write optimized DSO provides a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be updated to further InfoProviders. You only have to create the complex transformations once for all data.
    • Write-optimized DataStore objects are used as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.

    53) Explain the structure of Write optimized DSOs? How it is different from Standard DSOs?

    Ans:

    It contains only a table of active data, and there is no need to activate the data as is required with a standard DSO. This allows you to process the data more quickly.

    54) To perform a Join on dataset, what type of InfoProviders should be used?

    Ans:

    InfoSets are a special type of InfoProvider whose data source contains a join rule on DataStore objects, standard InfoCubes or InfoObjects with master data characteristics. InfoSets are used to join data, and that data is then used in the BI system.

    55)What is a temporal join?

    Ans:

    Temporal joins are used to map a period of time. At reporting time, other InfoProviders handle time-dependent master data in such a way that the record valid for a pre-defined, unique key date is used each time. A temporal join is one that contains at least one time-dependent characteristic or a pseudo time-dependent InfoProvider.
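The key-date lookup at the heart of a temporal join can be illustrated with a small sketch; the validity intervals and values below are made up:

```python
from datetime import date

# Hypothetical time-dependent master data: (value, valid_from, valid_to).
price_history = [
    ("old", date(2020, 1, 1), date(2020, 6, 30)),
    ("new", date(2020, 7, 1), date(9999, 12, 31)),
]

def valid_at(history, key_date):
    """Return the record whose validity interval contains key_date."""
    for value, start, end in history:
        if start <= key_date <= end:
            return value
    return None

print(valid_at(price_history, date(2020, 3, 15)))  # old
print(valid_at(price_history, date(2021, 1, 1)))   # new
```

For each row, the join picks exactly the one record valid on the key date, which is what "the record that is valid for a pre-defined unique key date" means above.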

    56) Where do we use InfoSet in BI system?

    Ans:

    • InfoSets are used to analyze data in multiple InfoProviders by combining master data characteristics, DataStore objects, and InfoCubes.
    • You can use a temporal join with an InfoSet to specify a particular point in time at which to evaluate the data.
    • You can report with the Business Explorer (BEx) on DSOs without enabling the BEx indicator.

    57) What are the different types of InfoSet joins?

    Ans:

    • Inner Join
    • Left Outer Join
    • Temporal Join
    • Self Join

    58) What is the use of InfoCube in BW system?

    Ans:

    • InfoCube is defined as multidimensional dataset which is used for analysis in a BEx query. An InfoCube consists of set of relational tables which are logically joined to implement star schema. A Fact table in star schema is joined with multiple dimension tables.
    • You can add data from one or more InfoSource or InfoProviders to an InfoCube. They are available as InfoProviders for analysis and reporting purposes.

    59)What is the structure of InfoCube?

    Ans:

    • An InfoCube is used to store the data physically. It consists of a number of InfoObjects that are filled with data from staging, and it has the structure of a star schema.
    • In SAP BI, an InfoCube implements the extended star schema.
    • An InfoCube consists of a fact table surrounded by 16 dimension tables, with the master data lying outside the cube.

    60)What is the use of real time InfoCube? How do you enter data in real time InfoCubes?

    Ans:

    • Real time InfoCubes are used to support parallel write access. Real time InfoCubes are used in connection with the entry of planning data.
    • You can enter the data in Real time InfoCubes in two different ways −
    • Transaction for entering planning data
    • BI Staging

    61) How do you create a real time InfoCube in administrator workbench?

    Ans:

    A real time InfoCube can be created using Real Time Indicator check box.

    62) Can you make an InfoObject as info provider and why?

    Ans:

    Yes; when you want to report on characteristics or master data, you can make them InfoProviders.

    63)Is it possible to convert a standard InfoCube to real time InfoCube?

    Ans:

    To convert a standard InfoCube to real time InfoCube, you have two options −

    • Convert with loss of Transactional data
    • Conversion with Retention of Transaction Data

    64) Can you convert an InfoPackage group into a Process chain?

    Ans:

    Yes: double-click on the InfoPackage group → Process Chain Maintenance button, and type in the name and description.

    65) When you define aggregates, what are the available options?

    Ans:

    • H: hierarchy
    • F: fixed value
    • Blank

    66) Can you setup InfoObjects as Virtual Providers?

    Ans:

    Yes.

    67) To perform a union operation on InfoProviders, which InfoProvider is used?

    Ans:

    MultiProvider

    68) Explain the difference between an Operational Data Store, an InfoCube and a MultiProvider?

    Ans:

    • ODS: provides granular data, allows overwrites, and stores the data in transparent tables; ideal for drilldown and RRI.
    • InfoCube: used for the star schema; we can only append data; ideal for primary reporting.
    • MultiProvider: contains no physical data of its own and allows access to data from different InfoProviders.
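The union semantics of a MultiProvider can be mimicked with plain SQL (here via Python's sqlite3; the cube names and rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cube_actuals (region TEXT, amount REAL)")
cur.execute("CREATE TABLE cube_plan (region TEXT, amount REAL)")
cur.execute("INSERT INTO cube_actuals VALUES ('North', 100.0)")
cur.execute("INSERT INTO cube_plan VALUES ('North', 120.0)")

# A MultiProvider-style union: rows from both providers appear side by
# side, tagged with their origin; no join and no physical storage of its own.
rows = cur.execute(
    """
    SELECT 'actuals' AS src, region, amount FROM cube_actuals
    UNION ALL
    SELECT 'plan', region, amount FROM cube_plan
    """
).fetchall()
```

The union keeps one row per source record, unlike an InfoSet join, which matches rows across providers.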

    69) What do you understand by Start and update routine?

    Ans:

    • Start Routines − The start routine is run for each Data Package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global Data Structures. This structure or table can be accessed in the other routines. The entire Data Package in the transfer structure format is used as a parameter for the routine.
    • Update Routines − They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.

    70) What are the data types for the characteristics info object?

    Ans:

    There are 4 types

    • CHAR
    • NUMC
    • DATS
    • TIMS

    71) What is the use of the process chain?

    Ans:

    The process chain is used to automate the data load process. It automates steps such as data loads, index creation and deletion, and cube compression.

    72) What are the transaction codes or T-codes for InfoCubes?

    Ans:

    The T-codes for Info-Cubes are

    • LISTCUBE: List viewer for InfoCubes
    • LISTSCHEMA: Show InfoCube schema
    • RSDCUBE, RSDCUBED, RSDCUBEM: Start InfoCube editing

    73) What is the maximum number of key figures and characteristics?

    Ans:

    The maximum number of key figures is 233 and characteristics are 248.

    74) How can you convert an info package group into the process chain?

    Ans:

    You can convert an InfoPackage group into a process chain by double-clicking on the InfoPackage group, then clicking the ‘Process Chain Maint’ button, where you type a name and description; this will insert the individual InfoPackages automatically.

    75) What is an InfoObject and why it is used in SAP BI?

    Ans:

    • InfoObjects are the smallest units in SAP BI and are used in InfoProviders, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.
    • InfoObjects are used in reports to analyze the data stored and to provide information to decision makers.

    76) What is SAP BW/BI? What is the purpose of SAP BW/BI?

    Ans:

    SAP BW/BI stands for Business Information Warehouse, also known as Business Intelligence. For any business, data reporting, analysis and interpretation of business data are crucial for running the business smoothly and making decisions. SAP BW/BI manages the data and enables the business to react quickly and in line with the market. It enables users to analyze data from operative SAP applications as well as from other businesses.


    77) What are the main areas and activities in SAP/BI?

    Ans:

    • Data warehouse: integrating, collecting and managing the entire company’s data.
    • Analysis and planning: using the data stored in the data warehouse.
    • Broadcast publishing: sending the information to employees by email, fax, etc.
    • Reporting: BI provides the tools for reporting in a web browser, Excel, etc.

    78) What is a ‘Fact Table’?

    Ans:

    A fact table is the collection of facts and relations, that is, foreign keys to the dimension tables. The fact table actually holds the transactional data.

    79)What is table partition?

    Ans:

    Table partitioning is done to manage huge volumes of data and to improve the efficiency of applications. Partitioning is based on 0CALMONTH or 0FISCPER. There are two types of partitioning:

    • Database partitioning
    • Logical partitioning
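Logical partitioning by a time characteristic such as 0CALMONTH can be sketched as simple row routing; this is a hedged illustration with made-up rows, not BW's actual implementation:

```python
from collections import defaultdict

# Hypothetical fact rows keyed by 0CALMONTH (YYYYMM).
rows = [
    {"calmonth": "202001", "amount": 10},
    {"calmonth": "202001", "amount": 5},
    {"calmonth": "202002", "amount": 7},
]

# Logical partitioning: route each row to a per-month bucket so that a
# query restricted to one month only scans that bucket.
partitions = defaultdict(list)
for row in rows:
    partitions[row["calmonth"]].append(row)

print(sorted(partitions))  # ['202001', '202002']
```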

    80) What are the features of multi provider?

    Ans:

    • It doesn’t contain any data of its own.
    • The data comes entirely from the InfoProviders on which it is based.
    • The InfoProviders are connected by union operations.

    Data flows from a transactional system to the analytical system (BW). The DataSource (DS) on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.
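    The union behaviour of a MultiProvider can be sketched like this (hypothetical data, not SAP code): the MultiProvider stores nothing itself, and a query against it is simply the union of the rows delivered by the underlying providers.

```python
def multiprovider_union(*providers):
    """Yield every row from each underlying provider (a union, not a join)."""
    for provider in providers:
        yield from provider

# Two hypothetical underlying providers: a cube and an ODS object.
cube_rows = [("2023", "EU", 100)]
ods_rows  = [("2023", "US", 40), ("2024", "EU", 60)]

result = list(multiprovider_union(cube_rows, ods_rows))
print(len(result))  # 3 -- all rows from both providers
```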

    81) What is ODS (Operational Data Store)?

    Ans:

    ‘Operational Data Store’ or ‘ODS’ is used for detailed storage of data. It is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes, and it allows BEx (Business Explorer) reporting. It is primarily used for detailed reporting rather than dimensional analysis, and it is not based on the star schema. ODS objects do not aggregate data as InfoCubes do. To load data into an ODS object, new records are inserted, existing records are updated, or old records are deleted as specified by the RECORDMODE value.
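    The RECORDMODE-driven load described above can be sketched in simplified form (illustrative only; real SAP RECORDMODE handling covers further cases such as before-images and reversals). Here ‘N’ marks a new record, an empty mode is an after-image that inserts or overwrites, and ‘D’ deletes, keyed on the record’s key field.

```python
def apply_records(store, records):
    """Apply (key, mode, value) records to a key-value store, ODS-style."""
    for key, mode, value in records:
        if mode == "D":
            store.pop(key, None)   # delete the old record
        else:                      # 'N' or '' -> insert / overwrite
            store[key] = value
    return store

state = apply_records({}, [
    ("k1", "N", 10),    # new record
    ("k2", "N", 20),    # new record
    ("k1", "",  15),    # after-image updates the existing record
    ("k2", "D", None),  # delete
])
print(state)  # {'k1': 15}
```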

    82) What are the extractors and mention their types?

    Ans:

    An extractor is a program used to extract data from the source system. The types of extractors in BW are:

    • Application-Specific: BW content FI, HR, CO, SAP CRM, LO Cockpit
    • Customer-Generated Extractors: LIS, FI-SL, CO-PA
    • Cross-Application (Generic Extractors): DB View, InfoSet, Function Module

    83) What is UD connect in SAP BW system? How does it allow reporting in BI system?

    Ans:

    • Universal Data (UD) Connect allows you to access relational and multidimensional data sources and transfer the data in flat form. Multidimensional data is converted to a flat format when UD Connect is used for data transfer.
    • UD Connect uses the J2EE connector to allow reporting on SAP and non-SAP data. Different BI Java connectors are available as resource adapters for various drivers and protocols:
    • BI ODBO Connector
    • BI JDBC Connector
    • BI SAP Query Connector
    • XMLA Connector

    84) What is the difference between ODS and InfoCubes?

    Ans:

    The differences between ODS and InfoCubes are:

    • ODS has a key while an InfoCube does not have any key
    • ODS contains detailed-level data while an InfoCube contains refined data
    • An InfoCube follows the star schema (16 dimensions) while ODS is a flat structure
    • There can be two or more ODS objects under a cube, so a cube can contain combined data or data derived from other fields in the ODS

    85) How many data types are there for a characteristic InfoObject?

    Ans:

    There are 4 data types:

    • DATS
    • TIMS
    • CHAR
    • NUMC
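    The four data types above follow the standard ABAP format conventions (DATS is an 8-character YYYYMMDD date, TIMS a 6-character HHMMSS time, NUMC a numeric text field, CHAR a free character string). A quick illustrative check of those formats:

```python
import re

# Format rules per the standard ABAP type conventions (illustrative check).
PATTERNS = {
    "DATS": re.compile(r"^\d{8}$"),   # e.g. 20200704 (YYYYMMDD)
    "TIMS": re.compile(r"^\d{6}$"),   # e.g. 235959   (HHMMSS)
    "NUMC": re.compile(r"^\d+$"),     # digits only, leading zeros allowed
    "CHAR": re.compile(r"^.*$"),      # any character string
}

def is_valid(dtype, value):
    """Return True if `value` matches the format for the given data type."""
    return bool(PATTERNS[dtype].match(value))

print(is_valid("DATS", "20200704"))  # True
print(is_valid("TIMS", "1230"))      # False -- TIMS must be 6 digits
print(is_valid("NUMC", "00042"))     # True
```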

    86) What are the T-codes for Info-cubes?

    Ans:

    The T-codes for Info-Cubes are

    • LISTSCHEMA: Show InfoCube schema
    • LISTCUBE: List viewer for InfoCubes
    • RSDCUBE, RSDCUBED, RSDCUBEM: Start InfoCube editing.

    87) What are the types of MultiProviders?

    Ans:

    The types of MultiProviders are:

    • Heterogeneous MultiProviders: the underlying InfoProviders have only a few characteristics and key figures in common. They can be used to model a scenario by dividing it into sub-scenarios, each represented by its own InfoProvider.
    • Homogeneous MultiProviders: they consist of technically identical InfoProviders, such as InfoCubes with exactly the same characteristics and key figures.

    88) What do you understand by data target administration task? 

    Ans:

    Data target administration tasks include:

    • Complete deletion of the data target’s contents
    • Constructing database statistics
    • Generating indexes
    • Deleting indexes
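    Three of the tasks above can be sketched against a plain SQL database (a generic illustration, not the SAP transaction itself): generating an index, building optimizer statistics, and deleting the index again.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")  # hypothetical data target
conn.execute("CREATE INDEX idx_target_id ON target(id)")    # generate index
conn.execute("ANALYZE")                                     # construct database statistics
conn.execute("DROP INDEX idx_target_id")                    # delete index

# Confirm no user-defined indexes remain on the table.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='index'")]
print(names)  # [] -- the index was dropped
```

Dropping indexes before a large load and regenerating them afterwards is a common pattern for speeding up bulk inserts.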

    89) What is BW statistics and how is it used?

    Ans:

    BW statistics is a set of cubes delivered by SAP that is used to measure performance for queries, data loading, etc. As the name suggests, it shows data about the costs associated with BW queries, OLAP, aggregated data, etc. It is useful for measuring how quickly queries are executed and how quickly data is loaded into BW.

    90) What is modeling?

    Ans:

    Database design is done using modeling. The design of the DB (database) depends on the schema, and a schema is defined as the representation of the tables and their relationships.
