25+ Tricky SAP BI Interview Questions with SMART ANSWERS

These SAP BI interview questions have been designed to acquaint you with the nature of questions you may encounter during an interview on SAP BI. In my experience, good interviewers hardly ever plan to ask any particular question; normally the questions start with some basic concept of the subject and then continue based on further discussion and your answers. We are going to cover the top SAP BI interview questions along with their detailed answers, including SAP BI scenario-based interview questions, SAP BI interview questions for freshers, and SAP BI interview questions and answers for experienced candidates.

Q1. Define MultiProvider in SAP BI? What are the features of MultiProviders?

Ans:

Multi-provider is a type of info-provider that contains data from a number of info-providers and makes it available for reporting purposes:

  • Multi-provider does not contain any data.
  • The data comes entirely from the info providers on which it is based.
  • The info-providers are connected to one another by union operations.
  • Info-providers and Multi-providers are the objects or views relevant for reporting.
  • A MultiProvider allows you to run reports using several InfoProviders; that is, it is used to create reports on one or more InfoProviders at a time (a conceptual sketch of the union behaviour follows below).
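
The following is a minimal Python sketch (not SAP code; the provider names and fields are made up) illustrating the union behaviour mentioned above: rows from the underlying InfoProviders are stacked on a common set of fields rather than joined key by key.

# Two hypothetical "InfoProviders" that share some reporting fields.
plan_data = [
    {"0MATERIAL": "M100", "0CALMONTH": "202001", "PLAN_QTY": 120},
    {"0MATERIAL": "M200", "0CALMONTH": "202001", "PLAN_QTY": 80},
]
actual_data = [
    {"0MATERIAL": "M100", "0CALMONTH": "202001", "ACTUAL_QTY": 95},
]

def multiprovider_union(*providers):
    """Union the rows of all providers; key figures missing in a source stay None."""
    all_fields = sorted({field for rows in providers for row in rows for field in row})
    return [{field: row.get(field) for field in all_fields}
            for rows in providers for row in rows]

for row in multiprovider_union(plan_data, actual_data):
    print(row)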

Q2 .What is BEx Map in SAP BI?

Ans:

BEx Map is BW's Geographical Information System (GIS). BEx Map is one of the BEx features in SAP BI, and it provides geographical information such as customer, customer sales region and country.

Q3. What is the T-code to see the log of a transport connection?

Ans:

In RSA1 → Transport Connection you can collect the queries and the roles, and after this you can transport them (release the transport in SE10 and import it in STMS).

RSA1:

  • Transport Connection (button on the left bar menu)
  • SAP Transport → Object Types (button on the left bar menu)
  • Find Query Elements → Query
  • Find your query
  • Group the necessary objects
  • Transport Objects (car icon)
  • Release the transport (T-code SE10)
  • Import the transport (T-code STMS)

Q4. LO/MM inventory data sources: what is the significance of the marker?

Ans:

The marker is like a checkpoint when you upload data from the inventory data sources.

2LIS_03_BX is the data source for the current stock and 2LIS_03_BF is for goods movements (movement types).

After uploading the data from BX you should compress (collapse) the request in the cube with marker update. Then load the historical data from the data source BF and compress that request with "no marker update" set. The marker is used as a checkpoint; if you do not handle it this way, you get a data mismatch at BEx level because the system gets confused.

In the compression (collapse) settings of the InfoCube:

(2LIS_03_BX Stock Initialization for Inventory Management: leave the "No marker update" checkbox unchecked, so that the marker is updated.)

(2LIS_03_BF Goods Movements from Inventory Management: for the historical load from the setup tables, select the "No marker update" checkbox; for delta loads, leave it unchecked.)

(2LIS_03_UM Revaluations: handled in the same way as 2LIS_03_BF.)

Q5. How can you navigate to see the error IDocs?

Ans:

Check the IDocs in the source system: go to BD87 → enter your user ID and the date → execute → you can find the red-status IDocs. Select the erroneous IDoc → right-click and choose "Manual process".

You need to reprocess these IDocs which are in RED status. For this you can take the help of your ALE/IDoc team or Basis team, or else you can push them through manually: just search for them in the BD87 screen and reprocess them there.

Also, try to find out why these IDocs got stuck there.

Q6. What is the difference between V1, V2 and V3 jobs in extraction?

Ans:

  • V1 Update: whenever we create a transaction in R/3(e.g.Sales Order) then the entries get into the R/3 Tables (VBAK, VBAP..) and this takes place in V1 Update.
  • V2 Update: V2 Update starts a few seconds after V1 Update and in this update the values get into Statistical Tables, from where we do the extraction into BW.
  • V3 Update: It is purely for BW extraction.

Q7. What are statistical update and document update?

Ans:

Synchronous Updating (V1 Update): The statistics update is made synchronously with the document update. If problems occur during updating that cause the statistics update to terminate, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; subsequently, the documents can be entered again.

Q8. Do you have any idea how to improve the performance of the BW?

Ans:

Asynchronous Updating (V2 Update): With this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (see V1 Update).

Radio button: Updating in V3 update program

Asynchronous Updating (V3 Update): With this update type, updating is made separately from the document update. The difference between this update type and the V2 Update lies, however, with the time schedule. If the V3 update is active, then the update can be executed at a later time.

In contrast to V1 and V2 Updates, no single documents are updated. The V3 update is, therefore, also described as a collective update.

Q9. How can you decide whether query performance is slow or fast?

Ans:

  • You can check that in transaction RSRT.
  • Execute the query in RSRT and then follow the step below.
  • Go to SE16 and in the resulting screen enter the table name RSDDSTAT (for BW 3.x) or RSDDSTAT_DM (for BI 7.0) and press Enter. You can view all the details about the query, such as the time taken to execute it and the timestamps.

Q10. What is the statistical setup and why is it needed?

Ans:

Follow these steps to fill the setup tables:

  • Go to transaction code RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction code LBWG (Delete Setup Data) and delete the data by entering the application name.
  • Go to transaction SBIW → Settings for Application-Specific DataSources → Logistics → Managing Extract Structures → Initialization → Filling in the Setup Table → Application-Specific Setup of Statistical Data → perform the setup for the relevant application.
  • In OLI*** (for example OLI7BW for the statistical setup of old documents: orders) give a name for the run and execute. Now all the available records from R/3 will be loaded into the setup tables.
  • Go to transaction RSA3 and check the data.
  • Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.
  • Go to the BW system, create an InfoPackage and, under the Update tab, select "Initialize Delta Process". Schedule the package. Now all the data available in the setup tables is loaded into the data target.
  • For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct/Queued Delta. By doing this, records bypass SM13 and go directly to RSA7. Go to transaction code RSA7; there you can see a green light, and as soon as new records are created you can see them in RSA7.
  • Go to the BW system and create a new InfoPackage for delta loads. Double-click on the new InfoPackage; under the Update tab you can select the delta update radio button.
  • Now you can go to your data target and see the delta records.

Q11. Why do we have to construct setup tables?

Ans:

  • The R/3 database structure for accounting is much simpler than the logistics structure.
  • Once you post in a ledger, that is done; you can correct it, but that just creates another posting.
  • BI can get information directly out of this (relatively) simple database structure.
  • In LO, you can have an order with multiple deliveries to more than one delivery address, and the payer can also be different.
  • When one item (order line) changes, this can have its reflection on the order, supply, delivery, invoice, etc.
  • Therefore a special record structure is built for logistics reports, and this structure is now used by BI.
  • In order to have this special structure filled with your starting position, you must run a setup. From that moment on, R/3 will keep filling this LO database. If you did not run the setup, BI would start only with data from the moment you started filling LO (with the Logistics Cockpit).

Q12. How can you eliminate duplicate records in transaction data (TD) and master data (MD)?

Ans:

For master data and texts, you can select the "Handle Duplicate Record Keys" option in the DTP. For transaction data, duplicates can be avoided by loading the data into a DataStore object first, where records with the same key are overwritten, before updating them to the InfoCube. You can also check the system logs through SM21 if a load behaves unexpectedly.

Q13. Explain the architecture of SAP BW system and its components?

Ans:

The main components of the SAP BW architecture are the BI server and the Business Explorer (BEx). The BI server handles data staging and storage and includes:

  • the OLAP processor,
  • the Metadata Repository,
  • the process designer and other functions.

Business Explorer (BEx) is the reporting and analysis tool that supports query, analysis and reporting functions in BI. Using BEx, you can analyze historical and current data at different levels of detail.

Q14. What is an InfoObject and why is it used in SAP BI?

Ans:

  • InfoObjects are known as the smallest units in SAP BI and are used in InfoProviders, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.
  • InfoObjects are used in reports to analyze the data stored and to provide information to decision makers.

Q15. What are the different categories of InfoObjects in BW system?

Ans:

InfoObjects can be categorized into the following categories:

  • Characteristics such as customer, product, etc.
  • Units such as currency, unit of measure, etc.
  • Key figures such as quantity sold, total revenue, profit, etc.
  • Time characteristics such as year, quarter, etc.

Q16. What is the use of an InfoArea in the SAP BW system?

Ans:

InfoAreas in SAP BI are used to group similar types of objects together. InfoAreas are used to manage InfoCubes and InfoObjects. Each InfoObject resides in an InfoArea, which you can think of as a folder that holds related objects together.

Q17. How do you access source system data in BI without extraction?

Ans:

You can directly access source system data in BI without extraction by using VirtualProviders. VirtualProviders can be defined as InfoProviders whose transactional data is not stored in the object itself. VirtualProviders allow only read access to the data.

Q18. What are the different types of VirtualProviders?

Ans:

  • Virtual Providers based on DTP
  • Virtual Providers with function modules
  • Virtual Providers based on BAPIs

Q19. Which Virtual Providers are used in which scenario of data extraction?

Ans:

VirtualProviders based on DTP: This type of VirtualProvider is based on a DataSource or an InfoProvider and takes over the characteristics and key figures of the source. The same extractors are used to select data in the source system as are used to replicate data into the BI system.

Q20. When do you use VirtualProviders based on DTP?

Ans:

  • When only a small amount of data is used.
  • You need to access up-to-date data from an SAP source system.
  • Only a few users execute queries simultaneously on the database.


    Q21. What is a VirtualProvider with a function module?

    Ans:

    This VirtualProvider is used to display data from a non-BI data source in BI without copying the data into the BI structures. The data can be local or remote. It is used primarily for SEM applications.

    Q22. What is the use of a transformation and how is the mapping done in BW?

    Ans:

    The transformation process is used to perform data consolidation, cleansing and data integration. When data is loaded from one BI object into another BI object, a transformation is applied to the data. A transformation is used to convert a field of the source into the target object format.

    Transformation rules: Transformation rules are used to map source fields to target fields. Different rule types can be used in a transformation, as shown in the sketch below.
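
    As a rough, non-SAP illustration of such rules, the Python sketch below maps hypothetical source fields to target fields using a direct assignment, a constant, a type conversion and a small cleansing routine:

# Hypothetical source record (e.g. a row from a DataSource / PSA).
source_record = {"MATNR": "m-100 ", "WERKS": "1000", "MENGE": "12.5"}

# Rule types, loosely modelled on direct assignment, constant and routine rules.
rules = {
    "0MATERIAL": lambda src: src["MATNR"].strip().upper(),  # routine: cleanse the value
    "0PLANT":    lambda src: src["WERKS"],                  # direct assignment
    "0QUANTITY": lambda src: float(src["MENGE"]),           # type conversion
    "0RECORDTP": lambda src: "0",                           # constant
}

target_record = {target: rule(source_record) for target, rule in rules.items()}
print(target_record)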

    Q23. How do you perform real-time data acquisition in the BW system?

    Ans:

    Real time data acquisition is based on moving data to Business Warehouse in real time. Data is sent to delta queue or PSA table in real time.

    Real time data acquisition can be achieved in two scenarios:

    • By using InfoPackage for real time data acquisition using Service API.
    • Using Web Service to load data to Persistent Storage Area PSA and then by using real time DTP to move the data to DSO.

    Real time Data Acquisition Background Process:

    • To process data with an InfoPackage and a data transfer process (DTP) at regular intervals, you can use a background process known as a daemon.
    • The daemon process gets all the information from the InfoPackage and the DTP about which data is to be transferred and which PSA and DataStore objects are to be loaded with data.

    Q24. What is the use of a DSO in the BW system? What kind of data is stored in DSOs, and what are the different components of the DSO architecture?

    Ans:

    A DataStore object (DSO) stores consolidated and cleansed transaction data or master data at a granular (document) level, and the data can be accessed for reporting and analysis after it is loaded and activated. A standard DSO consists of three components: the activation queue, the table of active data and the change log.

    Q25. What all data sources you have used to acquire data in SAP BW system?

    Ans:

    • SAP systems (SAP Applications/SAP ECC)
    • Relational Database (Oracle, SQL Server, etc.)
    • Flat File (Excel, Notepad)
    • Multidimensional Source systems (Universe using UDI connector)
    • Web Services that transfer data to BI by means of push

    Q26. When you are using SAP BI 7.x, to which component can you load the data?

    Ans:

    In BW 3.5, you can load data into the Persistent Staging Area and also into targets from the source system, but if you are using SAP BI 7.0, the data load should be restricted to the PSA only for the latest versions.

    Q27. What is an InfoPackage?

    Ans:

    An InfoPackage is used to specify how and when to load data into the BI system from different data sources. An InfoPackage contains all the information about how data is loaded from the source system into a DataSource or the PSA. An InfoPackage consists of the conditions for requesting data from a source system.

    Note: Using an InfoPackage in BW 3.5, you can load data into the Persistent Staging Area and also into targets from the source system, but if you are using SAP BI 7.0, the data load should be restricted to the PSA only for the latest versions.

    Q28. What is extended Star schema? Which of the tables are inside and outside cube in an extended star schema?

    Ans:

    In the Extended Star schema, fact tables are connected to dimension tables, the dimension tables are connected to SID tables, and the SID tables are connected to the master data tables. In the Extended Star schema, the fact and dimension tables are inside the cube, whereas the SID tables are outside the cube. When you load transactional data into the InfoCube, DIM IDs are generated based on the SIDs, and these DIM IDs are used in the fact table.
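
    The simplified Python sketch below (with made-up IDs and values) shows the chain described above: a fact row holds a DIM ID, the dimension table maps it to a SID, and the SID points to the characteristic value and its master data outside the cube.

# Hypothetical extended-star-schema tables, heavily simplified.
fact_table    = [{"DIMID_CUST": 1, "0AMOUNT": 500.0}]           # inside the cube
dim_customer  = {1: {"SID_CUSTOMER": 42}}                        # dimension table, inside the cube
sid_customer  = {42: "C-1001"}                                   # SID table, outside the cube
customer_attr = {"C-1001": {"COUNTRY": "DE", "REGION": "South"}} # master data, outside the cube

for fact in fact_table:
    sid = dim_customer[fact["DIMID_CUST"]]["SID_CUSTOMER"]  # DIM ID -> SID
    customer = sid_customer[sid]                             # SID -> characteristic value
    print(customer, customer_attr[customer], fact["0AMOUNT"])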

    Q29. How extended Star schema is different from Star schema?

    Ans:

    In the Extended Star schema, one fact table can connect to 16 dimension tables, and each dimension table can be assigned a maximum of 248 SID tables (characteristics). Each characteristic can have master data tables such as ATTR, TEXT, etc.

    In the classic Star schema, each dimension is joined to one single fact table. Each dimension is represented by a single dimension table and is not further normalized.

    A dimension table contains the set of attributes that are used to analyze the data.

    Q30. Which DataStore object is used to access data for reporting and analysis immediately after it is loaded?

    Ans:

    A DataStore object for direct update allows you to access data for reporting and analysis immediately after it is loaded. It differs from standard DSOs in the way it processes the data: data is stored in the same format in which it was loaded into the DataStore object for direct update by the application.

    Q31. Explain the structure of direct update DSOs?

    Ans:

    It consists of one table for active data; no change log area exists. Data is written to it from external systems using APIs.

    The following APIs exist:

    • RSDRI_ODSO_INSERT: These are used to insert new data.
    • RSDRI_ODSO_INSERT_RFC: Similar to RSDRI_ODSO_INSERT and can be called up remotely.
    • RSDRI_ODSO_MODIFY: This is used to insert data having new keys. For data with keys already in the system, the data is changed.
    • RSDRI_ODSO_MODIFY_RFC: Similar to RSDRI_ODSO_MODIFY and can be called up remotely.
    • RSDRI_ODSO_UPDATE: This API is used to update existing data.
    • RSDRI_ODSO_UPDATE_RFC: This is similar to RSDRI_ODSO_UPDATE and can be called up remotely.
    • RSDRI_ODSO_DELETE_RFC: This API is used to delete the data.

    Q32. Can we perform Delta uploads in direct update DSOs?

    Ans:

    As the structure of this DSO contains only one table for active data and no change log, it does not allow delta updates to further InfoProviders.

    Q33. What is a write-optimized DSO?

    Ans:

    In a write-optimized DSO, the data that is loaded is available immediately for further processing.

    Q34. Where do we use Write optimized DSOs?

    Ans:

    Write optimized DSO provides a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be updated to further InfoProviders. You only have to create the complex transformations once for all data.

    Write-optimized DataStore objects are used as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.

    Q35. Explain the structure of write-optimized DSOs. How is it different from standard DSOs?

    Ans:

    It only contains table of active data and there is no need to activate the data as required with standard DSO. This allows you to process the data more quickly.

    Q36. To perform a join on datasets, what type of InfoProvider should be used?

    Ans:

    InfoSets are defined as a special type of InfoProvider in which the data sources contain a join rule on DataStore objects, standard InfoCubes or InfoObjects with master data (characteristics). InfoSets are used to join data, and that data is then used in the BI system.

    Q37. What is a temporal join?

    Ans:

    Temporal joins are used to map a period of time. At the time of reporting, other InfoProviders handle time-dependent master data in such a way that the record that is valid for a pre-defined unique key date is used each time. You can define a temporal join that contains at least one time-dependent characteristic or a pseudo time-dependent InfoProvider (see the sketch below).
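
    The key-date logic can be pictured with the following minimal Python sketch (the validity intervals and field names are hypothetical): for a given key date, the master data record whose validity interval contains that date is the one used.

# Hypothetical time-dependent master data: one validity interval per record.
cost_center_history = [
    {"DATEFROM": "20190101", "DATETO": "20191231", "COSTCENTER": "CC10"},
    {"DATEFROM": "20200101", "DATETO": "99991231", "COSTCENTER": "CC20"},
]

def valid_record(records, key_date):
    """Return the record whose validity interval contains the key date."""
    for rec in records:
        if rec["DATEFROM"] <= key_date <= rec["DATETO"]:
            return rec
    return None

print(valid_record(cost_center_history, "20200315"))  # -> the CC20 record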

    Q38. Where do we use InfoSet in BI system?

    Ans:

    InfoSets are used to analyze the data in multiple InfoProviders by combining master data characteristics, DataStore objects, and InfoCubes.

    You can use temporal join with InfoSet to specify a particular point of time when you want to evaluate the data.

    You can also report with the Business Explorer (BEx) on DSOs without enabling the BEx indicator.

    Q39. What are the different type of InfoSet joins?

    Ans:

    • Inner Join
    • Left Outer Join
    • Temporal Join
    • Self Join

    Q40. What is the use of InfoCube in BW system?

    Ans:

    An InfoCube is defined as a multidimensional dataset which is used for analysis in a BEx query. An InfoCube consists of a set of relational tables which are logically joined to implement the star schema. A fact table in the star schema is joined with multiple dimension tables.

    You can add data from one or more InfoSource or InfoProviders to an InfoCube. They are available as InfoProviders for analysis and reporting purposes.

    Q41. What is the structure of InfoCube?

    Ans:

    An InfoCube is used to store the data physically. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.

    In SAP BI, an InfoCube uses the Extended Star schema.

    An InfoCube consists of a fact table which is surrounded by up to 16 dimension tables, with the master data lying outside the cube.

    Q42. What is the use of real time InfoCube? How do you enter data in real time InfoCubes?

    Ans:

    Real time InfoCubes are used to support parallel write access. Real time InfoCubes are used in connection with the entry of planning data.

    You can enter the data in Real time InfoCubes in two different ways :

    • Transaction for entering planning data
    • BI Staging

    Q43. How do you create a real time InfoCube in administrator workbench?

    Ans:

    A real time InfoCube can be created using Real Time Indicator check box.

    Q44. Can you make an InfoObject an InfoProvider, and why?

    Ans:

    Yes. When you want to report on characteristics or master data, you can make them InfoProviders.

    Q45. Is it possible to convert a standard InfoCube to real time InfoCube?

    Ans:

    To convert a standard InfoCube to real time InfoCube, you have two options :

    • Conversion with loss of transactional data
    • Conversion with Retention of Transaction Data

    Q46. Can you convert an InfoPackage group into a Process chain?

    Ans:

    Yes. Double-click on the InfoPackage group → Process Chain Maintenance button and type in the name and description.

    Q47. When you define aggregates, what are the available options?

    Ans:

    • H Hierarchy
    • F fixed value
    • Blank

    Q48. Can you setup InfoObjects as Virtual Providers?

    Ans:

    Yes.

    Q49. To perform a union operation on InfoProviders, which InfoProvider is used?

    Ans:

    MultiProvider

    Q50. Explain the difference between an Operational Data Store, an InfoCube and a MultiProvider?

    Ans:

    • ODS: It provides granular data, allows overwriting, and the data is stored in transparent tables; it is ideal for drill-down and RRI.
    • InfoCube: It is based on the star schema; we can only append data; it is ideal for primary reporting.
    • MultiProvider: It does not contain any physical data; it allows access to data from different InfoProviders.

    Q51. What do you understand by Start and update routine?

    Ans:

    • Start routines: The start routine is run for each data package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global data structures; these structures or tables can be accessed in the other routines. The entire data package in the transfer structure format is used as a parameter for the routine (a conceptual sketch follows after this list).
    • Update routines: They are defined at the InfoObject level. An update routine is similar to the start routine but is independent of the DataSource. We can use it to define global data and global checks.
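
    In BW these routines are written in ABAP; purely to illustrate the idea, the Python-style sketch below mimics a start routine that works on the whole data package before the individual rules run. The field names and the filter condition are hypothetical.

# Hypothetical data package as it might arrive from the PSA.
data_package = [
    {"0DOC_NUMBER": "4711", "0DOC_TYPE": "TA", "0NET_VALUE": 100.0},
    {"0DOC_NUMBER": "4712", "0DOC_TYPE": "RE", "0NET_VALUE": -40.0},
]

def start_routine(package):
    """Runs once per data package, before the transfer/update rules."""
    # e.g. keep only standard orders and remember a value in a global structure
    filtered = [row for row in package if row["0DOC_TYPE"] == "TA"]
    global_store = {"package_total": sum(row["0NET_VALUE"] for row in filtered)}
    return filtered, global_store

data_package, global_store = start_routine(data_package)
print(data_package, global_store)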

    Q52. What is SAP BW/BI? What is the purpose of SAP BW/BI?

    Ans:

    SAP BW/BI stands for Business Information Warehouse, also known as Business Intelligence. For any business, data reporting, analysis and interpretation of business data are crucial for running the business smoothly and making decisions. SAP BW/BI manages the data and enables the company to react quickly and in line with the market. It enables the user to analyze data from operative SAP applications as well as from other business systems.

    Q53. What are the main areas and activities in SAP BI?

    Ans:

    • Data warehousing: Integrating, collecting and managing the entire company's data.
    • Analysis and planning: Using the data stored in the data warehouse.
    • Broadcast publishing: Sending the information to the employees using e-mail, fax, etc.
    • Reporting: BI provides tools for reporting in a web browser, Excel, etc.

    Q54. What is table partition?

    Ans:

    Table partitioning is done to manage huge data volumes and to improve the efficiency of the applications. The partitioning is based on 0CALMONTH or 0FISCPER.

    There are two types of partitioning (a small sketch follows after the list below):

    • Database partitioning
    • Logical partitioning
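
    As a rough, non-SAP illustration of logical partitioning on 0CALMONTH, the Python sketch below routes hypothetical records into one bucket per year; database partitioning achieves a comparable split inside a single table at database level.

from collections import defaultdict

# Hypothetical transaction records keyed by 0CALMONTH (YYYYMM).
records = [
    {"0CALMONTH": "201912", "0AMOUNT": 10.0},
    {"0CALMONTH": "202001", "0AMOUNT": 25.0},
    {"0CALMONTH": "202002", "0AMOUNT": 5.0},
]

partitions = defaultdict(list)  # one "partition" (e.g. one cube per year) per key
for rec in records:
    partitions[rec["0CALMONTH"][:4]].append(rec)

for year, rows in sorted(partitions.items()):
    print(year, rows)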

    Q55. What is ODS (Operational Data Store)?

    Ans:

    ‘Operational Data Store’ or ‘ODS’ is used for detailed storage of data. It is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes, and it allows BEx (Business Explorer) reporting. It is primarily used for detailed reporting rather than dimensional analysis, and it is not based on the star schema. ODS (Operational Data Store) objects do not aggregate data as InfoCubes do. To load data into an ODS object, new records are inserted, existing records are updated, or old records are deleted, as specified by the RECORDMODE value.

    Q56. What is UD connect in SAP BW system? How does it allow reporting in BI system?

    Ans:

    Universal Data (UD) Connect allows you to access relational and multidimensional data sources and transfer the data into BI in the form of flat data. Multidimensional data is converted to a flat format when UD Connect is used for data transfer.

    UD Connect uses the J2EE connector to allow reporting on SAP and non-SAP data. Different BI Java connectors are available for various drivers and protocols as resource adapters:

    • BI ODBO Connector
    • BI JDBC Connector
    • BI SAP Query Connector
    • XMLA Connector

    Q57. How many data types are there for a characteristics InfoObject?

    Ans:

    There are 4 data types:

    • DATS
    • TIMS
    • CHAR
    • NUMC

    Q58. What are the T-codes for Info-cubes?

    Ans:

    The T-codes for Info-Cubes are:

    • LISTSCHEMA: Show InfoCube schema
    • LISTCUBE: List viewer for InfoCubes
    • RSDCUBE, RSDCUBED, RSDCUBEM: Start InfoCube editing.

    Q59. Write the types of Multi-providers?

    Ans:

    The types of Multi-providers are:

    • Heterogeneous MultiProviders: The InfoProviders involved have only a few characteristics and key figures in common. They can be used for modelling scenarios by dividing them into sub-scenarios, where each sub-scenario is represented by its own InfoProvider.
    • Homogeneous MultiProviders: They consist of technically identical InfoProviders, such as InfoCubes with exactly the same characteristics and key figures.

    Q60. What do you understand by data target administration task?

    Ans:

    Data target administration task includes:

    • Complete deletion of data target
    • Construct database statistics
    • Generate Index
    • Delete Index

    Q61. What are BW statistics and how are they used?

    Ans:

    The sets of cubes delivered by SAP are used to measure performance for queries, data loading, etc. BW statistics, as the name suggests, show data about the costs associated with BW queries, OLAP, aggregated data, etc. They are useful for measuring how quickly the queries are executed or how quickly the data is loaded into BW.

    Q62. UDC (Universal Data Connect) enhances SAP NetWeaver BI by supporting integration and connectivity to a wide range of data sources. Which of the following statements are true statements regarding UDC?

    Ans:

    • UDC integration with JDBC (Java Database Connection) only supports persistent data handling: data that is physically stored on BW.
    • UDC integration with JDBC (Java Database Connection) supports transient data handling: direct access to data that is stored in the source system.
    • Creation of UDC DataSources takes place in the Data Warehousing Workbench of SAP NetWeaver BI.
    • InfoPackages are scheduled for UDC DataSources even though the BI Java Connectors push data to the PSA tables. 

    Q63. What is the content of a fact table?

    Ans:

    • The key figures for a combination of characteristic SID values of the dimensions are stored in the fact table.
    • Changes to master data tables are only possible if new dimension keys are created in the fact table.
    • Both cumulative values and also key figures for non-cumulative values can be contained in the fact table. 

    Q64. The cell editor…?

    Ans:

    • is always available to make cell-specific definitions in the query definition.
    • is available to make cell-specific definitions in the query definition when two structures make up the definition.
    • allows you to define cells that have no direct reference to the associated structural components. 
    • allows you to determine the format of a cell in the BEx Report Designer.

    Q65. Which BI objects are essential for extracting and loading data from a source system into an InfoCube when using the new data flow technology in SAP NetWeaver 7.0 BI?

    Ans:

    • Transfer Rule
    • Data Source
    • Transformation
    • Info Source
    • Data Transfer Process 

    Q66. A Pre-Query is used…?

    Ans:

    • to pre-calculate the query result, so that the report is displayed more quickly at runtime.
    • to fill a characteristic value variable used in the result set query with the pre-query values via the replacement path.
    • to pre-calculate the results set for the Chart web item in order to determine if the chosen chart type can be displayed. 

    Q67. Display attributes…?

    Ans:

    • primarily serve as supplemental information for the carrying characteristic
    • can be used for navigating in a report.
    • can be used to sort in a report.
    • can be used in the query definition to filter individual values of the attribute.
    • can be used as navigation attributes in the query definition. 

    Q68. The key date of a query…?

    Ans:

    There are 2 correct answers to this question. Select which of the following answers are correct:

    • is used for restricting the time characteristics contained in a query.
    • is a decisive factor for the display of time-dependent texts and attributes, independent of the selected transaction data.
    • can be determined flexibly at the report runtime by using a variable
    • can also enable you to display the progression of attribute values in the selected time period using a variable that represents a time interval.

    Q69. Which of the following statements regarding exception aggregation are correct?

    Ans:

    • Exception aggregation is a feature that can be used in the definition of calculated key figures .
    • Exception aggregation is used to improve the response time of the query results.
    • The result of an exception aggregation can be used in another exception aggregation.
    • Exception aggregations are useful for highlighting important query results using color. 

    Q70. Which of the following statements is correct regarding the BEx Broadcaster and external sources of data?

    Ans:

    • You can broadcast external, non – SAP data sources directly as long as they are read through the staging BAPI.
    • The external data sources must be multi-dimensional sources.
    • The external data sources must be mapped through a VirtualProvider.
    • External data sources are read through the Java connectors XMLA or ODBO.
    • Data integrity is to eliminate duplicate entries in the database.

    Q71. What is data flow in BW/BI?

    Ans:

    Data flows from the transactional system to the analytical system (BW). The DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.

    Q72. What is an ‘Infocube’?

    Ans:

    An ‘InfoCube’ is structured as a star schema and it is a data storage area. To create an InfoCube, you require one fact table surrounded by at least four dimension tables. The fact table is surrounded by different dimension tables, which are linked with DIM IDs. And, depending on the data, you will have aggregated data in the cubes.

    Q73. How many tables does info cube contain?

    Ans:

    An InfoCube contains two types of tables: the fact table and the dimension tables.

    Q74. What is the maximum number of dimensions in InfoCubes?

    Ans:

    In InfoCubes, there can be a maximum of 16 dimensions (3 SAP-defined and 13 customer-defined).

    Q75. What is the dimension in BW? How would you optimize the dimensions?

    Ans:

    A dimension in BW is a collection of reference information about a measurable event in data warehousing. In this context, events are known as “facts”. For example, a customer dimension’s attributes could include first and last name, gender, birth date, etc. To optimize the dimensions, do not add the most dynamic characteristics to the same dimension, and keep the dimensions small. Also, define as many dimensions as possible, and a dimension should not exceed 20% of the fact table size.
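
    As a worked example of the 20% guideline above (the row counts are made up), a quick check could look like this:

def dimension_ok(dim_rows, fact_rows, threshold=0.20):
    """Rule of thumb: a dimension table should stay below ~20% of the fact table size."""
    return dim_rows <= threshold * fact_rows

print(dimension_ok(dim_rows=1_500_000, fact_rows=10_000_000))  # True  (15% of the fact table)
print(dimension_ok(dim_rows=3_000_000, fact_rows=10_000_000))  # False (30% of the fact table)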

    Q76. What are info objects?

    Ans:

    Characteristics and key figures are called InfoObjects. InfoObjects are similar to fields of the source system; based on them, we organize the data in the different InfoProviders in BW.
