25+ Tricky SAP BI/BW Interview Questions with SMART ANSWERS

Last updated on 03rd Jul 2020, Blog, Interview Questions

About author

Arunkumar (Sr SAP Director )

Highly experienced in his industry domain, with 9+ years of experience. He has also been a technical blog writer for the past 4 years, sharing informative content for job seekers.


SAP Business Warehouse (BW) integrates data from different sources, transforms and consolidates the data, performs data cleansing, and stores the data. It also covers data modeling, administration, and the staging area.

SAP Business Intelligence (BI) means analyzing and reporting data from different heterogeneous data sources. It allows you to acquire data from multiple data sources and stage it, after which it can be distributed to different BI systems. A SAP Business Intelligence system can work as a target system for data transfer or as a source system for distributing data to different BI targets.

1) What is SAP BW/BI? What is the purpose of SAP BW/BI?

Ans:

SAP BW/BI stands for Business Information Warehouse, also known as Business Intelligence. For any business, reporting, analysis, and interpretation of business data are crucial for running the business smoothly and making decisions. SAP BW/BI manages the data and enables the business to react quickly and in line with the market. It enables users to analyze data from operative SAP applications as well as from other business systems.

2) What is data Integrity?

Ans:

Data integrity means keeping the data accurate and consistent, for example by eliminating duplicate entries in the database.

3) What is table partition?

Ans:

Table partitioning is done to manage huge data volumes and improve the efficiency of applications. Partitioning is based on 0CALMONTH or 0FISCPER. There are two types of partitioning:

  • Database partitioning
  • Logical partitioning
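The benefit of partitioning can be illustrated outside SAP as well. Below is a minimal Python sketch (names and structures are illustrative, not SAP's) of logical partitioning on 0CALMONTH: records are routed into per-month buckets so that a query restricted to one month only reads that bucket instead of scanning the whole fact table.

```python
from collections import defaultdict

def partition_by_calmonth(records):
    """Route records into per-month buckets, mimicking logical
    partitioning of a fact table on 0CALMONTH."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[rec["0CALMONTH"]].append(rec)
    return partitions

def query_month(partitions, calmonth):
    """A query restricted to one month scans only that partition."""
    return partitions.get(calmonth, [])

sales = [
    {"0CALMONTH": "202001", "amount": 100},
    {"0CALMONTH": "202001", "amount": 250},
    {"0CALMONTH": "202002", "amount": 75},
]
parts = partition_by_calmonth(sales)
print(len(query_month(parts, "202001")))  # 2 rows read, instead of all 3
```

In a real BW system the database performs this pruning itself; the sketch only shows why restricting reports to the partitioning characteristic makes them faster.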

4) What is data flow in BW/BI?

Ans:

Data flows from a transactional system to the analytical system (BW). A DataSource (DS) on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.

5) What is ODS (Operational Data Store)?

Ans:

‘Operational Data Store’ or ‘ODS’ is used for detailed storage of data. It is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes, and it allows BEx (Business Explorer) reporting. It is primarily used for detailed reporting rather than dimensional analysis, and it is not based on the star schema. ODS (Operational Data Store) objects do not aggregate data as InfoCubes do. To load data into an ODS object, new records are inserted, existing records are updated, or old records are deleted, as specified by the RECORDMODE value.

6) What is an ‘Infocube’?

Ans:

An ‘InfoCube’ is structured as a star schema and is a data storage area. To create an InfoCube, you require 1 ‘fact table’ surrounded by 4 dimensions. The ‘fact table’ is surrounded by different dimension tables, which are linked with DIM IDs. And as per the data, you will have aggregated data in the cubes.

7) How many tables does info cube contain?

Ans:

InfoCubes contain two types of tables: the fact table and dimension tables.
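The relationship between the two table types can be sketched in plain Python (the table and column names below are illustrative, not SAP's): the fact table holds key figures plus DIM IDs, and each DIM ID resolves to a dimension row, which a star-schema query joins and aggregates over.

```python
# Hypothetical miniature star schema: a fact table keyed to a
# customer dimension table by DIMID.
dim_customer = {
    1: {"customer": "C100", "region": "EMEA"},
    2: {"customer": "C200", "region": "APAC"},
}
fact = [
    {"DIMID_CUST": 1, "revenue": 500},
    {"DIMID_CUST": 2, "revenue": 300},
    {"DIMID_CUST": 1, "revenue": 200},
]

# Resolve each fact row against its dimension row, as a star-schema
# query would, then aggregate revenue by region.
totals = {}
for row in fact:
    region = dim_customer[row["DIMID_CUST"]]["region"]
    totals[region] = totals.get(region, 0) + row["revenue"]

print(totals)  # {'EMEA': 700, 'APAC': 300}
```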

8) Mention what are the maximum number of dimensions in info cubes?

Ans:

In InfoCubes, there can be a maximum of 16 dimensions.

9) Explain the architecture of SAP BW system and its components?

Ans:

The SAP BW system architecture includes the following components:

  • OLAP Processor
  • Metadata Repository
  • Process Designer and other functions

Business Explorer (BEx) is a reporting and analysis tool that supports query, analysis, and reporting functions in BI. Using BEx, you can analyze historical and current data at different degrees of detail.

10) What all data sources you have used to acquire data in SAP BW system?

Ans:

  • SAP systems (SAP Applications/SAP ECC)
  • Relational Database (Oracle, SQL Server, etc.)
  • Flat File (Excel, Notepad)
  • Multidimensional Source systems (Universe using UDI connector)
  • Web Services that transfer data to BI by means of push

11) When you are using SAP BI7.x, you can load the data to which component?

Ans:

In BW 3.5, you can load data into the Persistence Staging Area and also into targets from the source system. In SAP BI 7.0, however, the data load should be restricted to the PSA only.

12) What is an InfoPackage?

Ans:

An InfoPackage is used to specify how and when to load data into the BI system from different data sources. An InfoPackage contains all the information about how data is loaded from the source system into a DataSource or the PSA, and it holds the conditions for requesting data from a source system.

Note that with an InfoPackage in BW 3.5 you can load data into the Persistence Staging Area and also into targets from the source system, whereas in SAP BI 7.0 the data load should be restricted to the PSA only.

13) What is extended Star schema? Which of the tables are inside and outside cube in an extended star schema?

Ans:

In the Extended Star schema, fact tables are connected to dimension tables, each dimension table is connected to SID tables, and the SID tables are connected to master data tables. The fact and dimension tables are inside the cube, whereas the SID tables (and master data) are outside the cube. When you load transactional data into the InfoCube, DIM IDs are generated based on SIDs, and these DIM IDs are then used in the fact table.
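The SID-to-DIM-ID mechanism described above can be sketched as follows. This is a simplified, illustrative Python model (real BW generates SIDs and DIM IDs in database tables): each characteristic value gets a surrogate SID, a combination of SIDs gets a DIM ID, and the fact table stores only DIM IDs plus key figures.

```python
# Illustrative sketch of SID/DIM-ID generation during a load into an
# extended-star-schema cube (simplified; not SAP's actual implementation).
sid_table = {}   # characteristic value -> SID
dim_table = {}   # tuple of SIDs -> DIM ID

def get_sid(value):
    """Return the SID for a characteristic value, creating it if new."""
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1
    return sid_table[value]

def get_dim_id(*sids):
    """Return the DIM ID for a combination of SIDs, creating it if new."""
    if sids not in dim_table:
        dim_table[sids] = len(dim_table) + 1
    return dim_table[sids]

fact_table = []
for customer, product, revenue in [("C100", "P1", 500),
                                   ("C100", "P2", 300),
                                   ("C200", "P1", 150)]:
    dim_id = get_dim_id(get_sid(customer), get_sid(product))
    fact_table.append({"DIMID": dim_id, "revenue": revenue})

print(fact_table[0]["DIMID"], fact_table[2]["DIMID"])  # 1 3
```

Because the fact table only stores small integer DIM IDs, the (potentially wide) master data stays outside the cube and can be shared across InfoCubes.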

14) How extended Star schema is different from Star schema?

Ans:

In the Extended Star schema, one fact table can connect to 16 dimension tables, and each dimension table can be assigned a maximum of 248 SID tables. The SID tables link to characteristics, and each characteristic can have master data tables such as ATTR, TEXT, etc.

In a classic Star schema, each dimension table is joined directly to the single fact table. Each dimension is represented by only one dimension table, which is not further normalized.

A dimension table contains the set of attributes that are used to analyze the data.

15) What is an InfoObject and why it is used in SAP BI?

Ans:

InfoObjects are the smallest units in SAP BI and are used in InfoProviders, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.

InfoObjects are used in reports to analyze the stored data and to provide information to decision makers.

16) What are the different categories of InfoObjects in BW system?

Ans:

InfoObjects can be categorized into the following categories:

  • Characteristics like Customer, Product, etc.
  • Units like Quantity sold, currency, etc.
  • Key Figures like Total Revenue, Profit, etc.
  • Time characteristics like Year, quarter, etc.

17) What is the use of Infoarea in SAP BW system?

Ans:

InfoAreas in SAP BI are used to group similar types of objects together. InfoAreas are used to manage InfoCubes and InfoObjects. Each InfoObject resides in an InfoArea, and you can think of it as a folder used to hold similar files together.

18) How do you access to source system data in BI without extraction?

Ans:

You can directly access source system data in BI without extraction by using VirtualProviders. VirtualProviders are InfoProviders in which transactional data is not stored in the object itself; they allow read-only access to BI data.

19) What are different types on Virtual providers?

Ans:

  • VirtualProviders based on DTP
  • VirtualProviders with function modules
  • VirtualProviders based on BAPIs

20) Which Virtual Providers are used in which scenario of data extraction?

Ans:

VirtualProviders based on DTP −

This type of VirtualProvider is based on a DataSource or an InfoProvider and takes over the characteristics and key figures of the source. The same extractors are used to select data in the source system as when you replicate data into the BI system.

When to use VirtualProviders based on DTP?

  • Only a small amount of data is accessed.
  • You need up-to-date data from an SAP source system.
  • Only a few users execute queries simultaneously on the database.

VirtualProviders with function module −

This VirtualProvider is used to display data from a non-BI data source in BI without copying the data into the BI structures. The data can be local or remote. It is used primarily for SEM applications.


    21) What is the use of Transformation and how the mapping is done in BW?

    Ans:

    The transformation process is used to perform data consolidation, cleansing, and integration. When data is loaded from one BI object to another BI object, a transformation is applied to the data. A transformation is used to convert a field of the source into the target object format.

    Transformation rules −

    Transformation rules are used to map source fields to target fields. Different rule types can be used in a transformation.

    22) How do you perform real time data acquisition in BW system?

    Ans:

    Real time data acquisition is based on moving data to Business Warehouse in real time. Data is sent to delta queue or PSA table in real time.

    Real time data acquisition can be achieved in two scenarios −

    By using an InfoPackage for real time data acquisition using the Service API.

    By using a Web Service to load data into the Persistent Storage Area (PSA) and then using a real time DTP to move the data into a DSO.

    Real time Data Acquisition Background Process −

    To pass data to InfoPackages and data transfer processes (DTP) at regular intervals, you can use a background process known as a daemon.

    The daemon process gets all the information from the InfoPackage and DTP about which data is to be transferred and which PSA and DataStore objects are to be loaded with data.

    23) What is InfoObject catalog?

    Ans:

    InfoObjects are created in an InfoObject catalog. It is possible for an InfoObject to be assigned to more than one InfoObject catalog.

    24) What is the use of DSO in BW system? What kind of data is stored in DSOs?

    Ans:

    A DSO is a storage place for cleansed and consolidated transaction or master data at the lowest granularity level, and this data can be analyzed using a BEx query.

    A DataStore object contains key figures and characteristic fields; data in a DSO can be updated using delta updates, other DataStore objects, or master data. DataStore objects are commonly stored in two-dimensional transparent database tables.

    25) What are the different components in DSO architecture?

    Ans:

    DSO component consists of three tables −

    • Activation Queue −

    This is used to store the data before it is activated. The key contains the request ID, package ID, and record number. Once activation is done, the request is deleted from the activation queue.

    • Active Data Table

    This table is used to store current active data and this table contains the semantic key defined for data modeling.

    • Change Log −

    When you activate the object, changes to the active data are stored in the change log. The change log is a PSA table and is maintained in the Administration Workbench under the PSA tree.
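The interplay of the three tables can be sketched in a few lines of Python. This is an illustrative simplification (real BW activation runs in the database and uses RECORDMODE-style images): activation moves a request out of the queue, overwrites the active table by semantic key, and writes before/after images to the change log so that downstream targets can receive a delta.

```python
# Simplified sketch of standard-DSO activation (illustrative only).
active_table = {}   # semantic key -> record
change_log = []     # before/after images, usable as a delta source

def activate(activation_queue):
    for key, new_values in activation_queue:
        old = active_table.get(key)
        if old is not None:
            # before image: old value negated, so additive targets net out
            change_log.append({"key": key, "mode": "X", "amount": -old["amount"]})
        # after image: the new value
        change_log.append({"key": key, "mode": " ", "amount": new_values["amount"]})
        active_table[key] = new_values
    activation_queue.clear()   # request leaves the activation queue

activate([("DOC1", {"amount": 100}), ("DOC2", {"amount": 50})])
activate([("DOC1", {"amount": 120})])   # second request overwrites DOC1

print(active_table["DOC1"]["amount"])   # 120
print(len(change_log))                  # 4: two new images + a before/after pair
```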

    26) To access data for reporting and analysis immediately after it is loaded, which Data store object is used?

    Ans:

    A DataStore object for direct update allows you to access data for reporting and analysis immediately after it is loaded. It differs from standard DSOs in the way it processes data: data is stored in the same format in which it was loaded into the DataStore object by the application.

    27) Explain the structure of direct update DSO’s?

    Ans:

    A direct update DSO contains only one table, for active data; no change log area exists. Data is retrieved from external systems using APIs.

    Below API’s exists −

    • RSDRI_ODSO_INSERT: These are used to insert new data.
    • RSDRI_ODSO_INSERT_RFC: Similar to RSDRI_ODSO_INSERT and can be called up remotely.
    • RSDRI_ODSO_MODIFY: This is used to insert data having new keys. For data with keys already in the system, the data is changed.
    • RSDRI_ODSO_MODIFY_RFC: Similar to RSDRI_ODSO_MODIFY and can be called up remotely.
    • RSDRI_ODSO_UPDATE: This API is used to update existing data.
    • RSDRI_ODSO_UPDATE_RFC: This is similar to RSDRI_ODSO_UPDATE and can be called up remotely.
    • RSDRI_ODSO_DELETE_RFC: This API is used to delete the data.

    28) Can we perform Delta uploads in direct update DSO’s?

    Ans:

    As the structure of this DSO contains only one table for active data and no change log, it does not allow delta updates to InfoProviders.

    29) What is write optimized DSO’s?

    Ans:

    In Write optimized DSO, data that is loaded is available immediately for the further processing.

    30) Where do we use Write optimized DSO’s?

    Ans:

    Write optimized DSO provides a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be updated to further InfoProviders. You only have to create the complex transformations once for all data.

    Write-optimized DataStore objects are used as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.


    31) Explain the structure of Write optimized DSO’s? How it is different from Standard DSO’s?

    Ans:

    It only contains a table of active data, and there is no need to activate the data as is required with a standard DSO. This allows you to process the data more quickly.

    32) To perform a Join on dataset, what type of InfoProviders should be used?

    Ans:

    InfoSets are a special type of InfoProvider whose data source contains join rules on DataStore objects, standard InfoCubes, or InfoObjects with master data characteristics. InfoSets are used to join data, and the joined data can then be used in the BI system.

    33) What is a temporal join?

    Ans:

    Temporal joins are used to map a period of time. At reporting time, other InfoProviders handle time-dependent master data in such a way that the record valid for a pre-defined unique key date is used each time. A temporal join must contain at least one time-dependent characteristic or a pseudo time-dependent InfoProvider.
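The key-date selection at the heart of a temporal join can be sketched as follows. This is an illustrative Python model (the cost-center example and field names are assumptions, not SAP's): each time-dependent master data record carries a validity interval, and the join keeps only the record valid on the query's key date.

```python
from datetime import date

# Time-dependent master data: each record is valid for an interval.
cost_center_attr = [
    {"costcenter": "CC1", "manager": "Smith",
     "valid_from": date(2019, 1, 1), "valid_to": date(2019, 12, 31)},
    {"costcenter": "CC1", "manager": "Jones",
     "valid_from": date(2020, 1, 1), "valid_to": date(9999, 12, 31)},
]

def valid_on(records, key_date):
    """Keep only records whose validity interval covers the key date,
    as a temporal join does for a query's key date."""
    return [r for r in records
            if r["valid_from"] <= key_date <= r["valid_to"]]

print(valid_on(cost_center_attr, date(2020, 7, 3))[0]["manager"])  # Jones
print(valid_on(cost_center_attr, date(2019, 6, 1))[0]["manager"])  # Smith
```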

    34) Where do we use InfoSet in BI system?

    Ans:

    InfoSets are used to analyze data in multiple InfoProviders by combining master data characteristics, DataStore objects, and InfoCubes.

    You can use a temporal join with an InfoSet to specify a particular point in time at which you want to evaluate the data.

    Using an InfoSet, you can report with Business Explorer (BEx) on DSOs without enabling the BEx indicator.

    35) What are the different type of InfoSet joins?

    Ans:

    • Inner Join
    • Left Outer Join
    • Temporal Join
    • Self Join
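The difference between the first two join types can be shown with a minimal Python sketch (the order/delivery data is illustrative): an inner join keeps only keys present in both datasets, while a left outer join keeps every row of the left dataset and leaves the right-hand fields empty where no match exists.

```python
dsos = [{"order": 1, "status": "open"}, {"order": 2, "status": "closed"}]
deliveries = [{"order": 1, "qty": 10}]

def inner_join(left, right, key):
    """Keep only rows whose key exists on both sides."""
    idx = {r[key]: r for r in right}
    return [{**l, **idx[l[key]]} for l in left if l[key] in idx]

def left_outer_join(left, right, key):
    """Keep all left rows; unmatched rows simply lack the right-hand fields."""
    idx = {r[key]: r for r in right}
    return [{**l, **idx.get(l[key], {})} for l in left]

print(len(inner_join(dsos, deliveries, "order")))       # 1
print(len(left_outer_join(dsos, deliveries, "order")))  # 2
```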

    36) What is the use of InfoCube in BW system?

    Ans:

    An InfoCube is a multidimensional dataset used for analysis in a BEx query. An InfoCube consists of a set of relational tables that are logically joined to implement the star schema: a fact table joined with multiple dimension tables.

    You can add data from one or more InfoSources or InfoProviders to an InfoCube. InfoCubes are available as InfoProviders for analysis and reporting purposes.

    37) What is the structure of InfoCube?

    Ans:

    An InfoCube is used to store the data physically. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.

    In SAP BI, an InfoCube uses the Extended Star Schema.

    An InfoCube consists of a fact table surrounded by up to 16 dimension tables, with master data lying outside the cube.

    38)What is the use of real time InfoCube? How do you enter data in real time InfoCubes?

    Ans:

    Real time InfoCubes are used to support parallel write access. Real time InfoCubes are used in connection with the entry of planning data.

    You can enter the data in Real time InfoCubes in two different ways −

    Transaction for entering planning data

    BI Staging

    39) How do you create a real time InfoCube in administrator workbench?

    Ans:

    A real time InfoCube can be created using Real Time Indicator check box.


    40) Is it possible to convert a standard InfoCube to real time InfoCube?

    Ans:

    To convert a standard InfoCube to real time InfoCube, you have two options −

    Convert with loss of Transactional data

    Conversion with Retention of Transaction Data

    41) Can you convert an InfoPackage group into a Process chain?

    Ans:

    Yes. Double-click on the InfoPackage group → Process Chain Maintenance button, and type in the name and description.

    42) When you define aggregates, what are the available options?

    Ans:

    • H – hierarchy
    • F – fixed value
    • Blank – none

    43) Can you setup InfoObjects as Virtual Providers?

    Ans:

    Yes.

    44) To perform a Union operation on InfoProviders, which InfoProvider is used?

    Ans:

    MultiProvider

    45) Explain the difference between Operational Data Store, InfoCube and MultiProvider?

    Ans:

    ODS −

    It provides granular data, allows overwriting, and stores data in transparent tables; ideal for drilldown and RRI.

    InfoCube −

    This uses the star schema; data can only be appended; ideal for primary reporting.

    MultiProvider −

    It contains no physical data; it allows access to data from different InfoProviders.

    46) What do you understand by Start and update routine?

    Ans:

    Start Routines −

    The start routine is run for each Data Package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global Data Structures. This structure or table can be accessed in the other routines. The entire Data Package in the transfer structure format is used as a parameter for the routine.

    Update Routines −

    They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.

    47) What is the use of Rollup?

    Ans:

    Rollup is used to load new data packages into the InfoCube aggregates. If a rollup has not been performed, the new InfoCube data will not be available when reporting on the aggregate.

    48) How can you achieve performance optimization in SAP Data Warehouse?

    Ans:

    During loading, perform steps in below order −

    First load the master data in the following order: First attributes, then texts, then hierarchies.

    Load the master data first and then the transaction data. By doing this, you ensure that the SIDs are created before the transaction data is loaded and not while the transaction data is being loaded.

    To optimize performance when loading and deleting data from the InfoCube −

    • Indexes
    • Aggregates
    • Line item and high Cardinality
    • Compression

    To achieve good activation performance for DataStore objects, you should note the following points −

    Creating SID Values

    Generating SID values takes a long time and can be avoided in the following cases −

    Do not set the ‘Generate SID values’ flag, if you only use the DataStore object as a data store. If you do set this flag, SIDs are created for all new characteristic values.

    If you are using line items (document number or time stamp, for example) as characteristics in the DataStore object, set the flag in characteristic maintenance to show that they are “attribute only”.

    49) What is Partition of an InfoCube?

    Ans:

    It is the method of dividing a table for report optimization. SAP uses fact table partitioning to improve performance. You can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster because data is read only from the relevant partitions; table maintenance also becomes easier.

    50) Explain the difference between InfoCube and ODS?

    Ans:

    An InfoCube is structured as a star schema, where a fact table is surrounded by dimension tables that are linked with DIM IDs; data can only be appended.

    An ODS is a flat structure with no star schema concept; it holds granular (detailed-level) data and provides overwrite functionality.
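The overwrite-versus-append distinction can be sketched in Python (illustrative data and names, not SAP's): loading the same key into an ODS replaces the old record, while a cube-style load only appends, which is why corrections are typically posted through an ODS first.

```python
# ODS-style overwrite vs. InfoCube-style append (illustrative).
def load_into_ods(ods, records, key):
    for rec in records:
        ods[rec[key]] = rec          # same key -> record is overwritten

def load_into_cube(cube, records):
    cube.extend(records)             # cube only appends; no overwrite

ods, cube = {}, []
delta = [{"order": 1, "amount": 100}]
correction = [{"order": 1, "amount": 90}]   # corrected value for order 1

for batch in (delta, correction):
    load_into_ods(ods, batch, "order")
    load_into_cube(cube, batch)

print(ods[1]["amount"], len(cube))  # 90 2: ODS holds the correction,
                                    # the cube holds both rows
```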


    51) What is the use of Navigational attributes?

    Ans:

    Navigational attributes are used for drilling down in the report.

    52) While loading data from flat files, separators are used inconsistently. How will this be read in a BI load?

    Ans:

    If separators are used inconsistently in a CSV file, the incorrect separator is read as a character and both fields are merged into one field and may be shortened. Subsequent fields are then no longer in the correct order.
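This failure mode is easy to reproduce outside BI. The sketch below (illustrative data) parses a CSV where one row wrongly uses ';' instead of ',': the parser cannot split that row, so its fields collapse into a single merged value and the column positions of the record are lost.

```python
import csv
import io

# Two rows meant to have 3 fields each; the second row wrongly uses ';'.
raw = "C100,EMEA,500\nC200;APAC;300\n"

rows = list(csv.reader(io.StringIO(raw), delimiter=","))
print(len(rows[0]))  # 3 -- parsed correctly
print(len(rows[1]))  # 1 -- the ';' row collapses into one merged field
print(rows[1])       # ['C200;APAC;300']
```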

    53) To load the data from a file source system, what is requirement in BI system?

    Ans:

    Before you can transfer data from a file source system, the metadata must be available in BI in the form of a DataSource.

    54) In SAP BW, is it possible to have multiple data sources have one InfoSource?

    Ans:

    Yes.

    55) How data is stored in PSA?

    Ans:

    In form of PSA tables

    56) What is the use of DB connection in SAP BW data acquisition?

    Ans:

    DB Connect is used to define database connections in addition to the default connection; these connections are used to transfer data into the BI system from tables or views.

    To connect an external database, you should have below information −

    • Tools
    • Source Application knowledge
    • SQL syntax in Database
    • Database functions

    57) What is UD connect in SAP BW system? How does it allow reporting in BI system?

    Ans:

    Universal Data (UD) Connect allows you to access relational and multidimensional data sources and transfer the data in the form of flat data; multidimensional data is converted to a flat format when UD Connect is used for data transfer.

    UD Connect uses the J2EE connector to allow reporting on SAP and non-SAP data. Different BI Java connectors are available for various drivers and protocols as resource adapters −

    • BI ODBO Connector
    • BI JDBC Connector
    • BI SAP Query Connector
    • XMLA Connector

    58) Some data has been uploaded twice into an InfoCube. How do you correct it?

    Ans:

    If you loaded the data manually twice, you can delete the duplicate load by its request ID.

    59) Can you add a new field at the ODS level?

    Ans:

    Yes, you can. An ODS is essentially a transparent table.

    60) Can a number of DataSources have one InfoSource?

    Ans:

    Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

    61) What can an InfoSet query be made of?

    Ans:

    It can be made of ODS objects and InfoObjects.

    62) If there are 2 DataSources, how many transfer structures are there?

    Ans:

    Two in R/3 and two in BW.

    63) What are routines?

    Ans:

    Routines exist at the InfoObject level: transfer routines, update routines, and the start routine.

    64) Briefly describe some structures used in BEx.

    Ans:

    You can create structures for rows and columns.

    65) What are the different variables used in BEx?

    Ans:

    • Variable with default entry
    • Replacement path
    • SAP exit
    • Customer exit
    • Authorization

    66) What are the conversion routines for units and currencies in the update rule?

    Ans:

    Time dimensions are converted automatically. For example, if the cube contains calendar month and your transfer structure contains a date, the date is converted to calendar month automatically.
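The automatic date-to-month derivation mentioned above amounts to a simple mapping, sketched here in Python (the function name is illustrative): a calendar day yields the YYYYMM value that 0CALMONTH stores.

```python
from datetime import date

def calday_to_calmonth(d):
    """Derive a 0CALMONTH-style YYYYMM value from a calendar day,
    mirroring the automatic time-characteristic conversion."""
    return f"{d.year:04d}{d.month:02d}"

print(calday_to_calmonth(date(2020, 7, 3)))   # 202007
print(calday_to_calmonth(date(1999, 12, 31))) # 199912
```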

    67) Can you make an InfoObject an InfoProvider, and why?

    Ans:

    Yes. When you want to report on characteristics or master data, you can make them InfoProviders. For example, you can make 0CUSTOMER an InfoProvider and do BEx reporting on 0CUSTOMER: right-click on the InfoArea and select ‘Insert characteristic as data target’.

    68) What are the parallel process that could have locking problems? 

    Ans:

    • hierarchy/attribute change run
    • loading master data for the same InfoObject; for example, avoid loading master data from different source systems at the same time.
    • rolling up for the same InfoCube.
    • selecting deletion of an InfoCube/ODS while loading in parallel.
    • activation or deletion of an ODS object while loading in parallel.

    69) How would you convert an InfoPackage group into a process chain?

    Ans:

    Double-click on the InfoPackage group, click on the ‘Process Chain Maint’ button, and type in the name and description; the individual InfoPackages are inserted automatically.

    70) How do you transform Open Hub data?

    Ans:

    Using a BAdI.


    71) What are the data loading tuning one can do? 

    Ans:

    • Watch the ABAP code in transfer and update rules.
    • Balance the load on different servers.
    • Create indexes on source tables.
    • Use fixed-length files if you load data from flat files, and put the file on the application server.
    • Use content extractors.
    • Use the ‘PSA and data target in parallel’ option in the InfoPackage.
    • Start several InfoPackages in parallel with different selection options.
    • Buffer the SID number ranges if you load a lot of data at once.
    • Load master data before loading transaction data.

    72) What is ODS?

    Ans:

    Operational Data Store. You can overwrite the existing data in an ODS.

    73) What is the use of BW Statistics?

    Ans:

    The set of cubes delivered by SAP is used to measure performance for queries, data loading, etc. It also shows the usage of aggregates and the cost associated with them.

    74) What are the options when defining aggregates? 

    Ans:

    Aggregates group data according to characteristics:

    • H – hierarchy
    • F – fixed value
    • Blank – none

    75) How will you debug errors with SAP GUI (like Active X error etc) 

    Ans:

    Run BEx Analyzer → Business Explorer menu item → Installation Check; this shows an Excel sheet with a Start button. Click it to verify the GUI installation; if you find any errors, either reinstall or fix them.

    76) When you write user exit for variables what does I_Step do? 

    Ans:

    I_STEP is used in the ABAP user exit code as a conditional check indicating when the exit is called (for example, before the variable screen is displayed, after variable entry, or during validation of the entries).

    77) Write the types of Multi-providers?

    Ans:

    Types of Multi-providers are:

    • Heterogeneous MultiProviders: These InfoProviders have only a few common characteristics and key figures. They can be used to model scenarios by dividing them into sub-scenarios, with each sub-scenario represented by its own InfoProvider.
    • Homogeneous MultiProviders: These consist of technically identical InfoProviders, such as InfoCubes with exactly the same characteristics and key figures.

    78)  List the differences between BW 3.5 and BI 7.0 versions.

    Ans:

    SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver. Some of the major differences are:

    • No update rules or transfer rules (not mandatory in the data flow).
    • Instead of update rules and transfer rules, a new concept called transformations was introduced.
    • A new type of DataStore object was introduced in addition to the standard and transactional ones.
    • ODS was renamed DataStore to align with global data warehousing standards.
    • In InfoSets you can now include InfoCubes as well.
    • The remodeling transaction helps you add new key figures and characteristics and handle historical data as well. This facility is available only for InfoCubes.
    • The BI Accelerator (for now only for InfoCubes) helps reduce query run time by almost a factor of 10 to 100. The BI Accelerator is a separate box and costs more.
    • Monitoring has been improved with a new portal-based cockpit.
    • Search functionality has improved: you can search for any object, unlike in 3.5.
    • Transformations replace the transfer and update rules.
    • Remodeling of InfoProviders supports Information Lifecycle Management.
    • XML data can be pushed into the BI system (into the PSA) without the Service API or delta queue, and remote activation of DataSources in SAP source systems is possible from BI.
    • There are functional changes to the Persistent Staging Area (PSA).
    • BI supports real-time data acquisition.
    • SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW).
    • Loading through the PSA has become mandatory: you cannot skip it, and there is no IDoc transfer method in BI 7.0. The Data Transfer Process (DTP) replaced the transfer and update rules, and in transformations you can now use start, expert, and end routines during data load.
    • User management (including a new concept for analysis authorizations) allows more flexible BI end-user authorizations.

    79) What do you understand by system landscape? What kind of landscapes are possible with SAP Netweaver?

    Ans:

    A landscape is a logical grouping of systems. The grouping of landscape can be horizontal or vertical.

    Horizontal landscapes comprise two or more SAP systems (system IDs – SIDs) that support “promote to production” of software for a particular piece or set of functionality – for example, the development, quality assurance, and productive systems for BI functionality form the “BI landscape”.

    Vertical landscapes comprise the systems in a particular area of the landscape – for example, all of the systems that run productive services form the “production landscape”.

    SAP NetWeaver based system landscape deployment options

    As with any software implementation (and SAP NetWeaver based systems are no different), the ideal software landscape comprises environments supporting three distinct needs, providing a solid ‘promote to production’ change management and change control process for all configuration and development.

    These environments should provide:

    • An environment where customizing and development can be performed. The environment should be representative of the productive environment and contain all production customizing, developments, and a sampling of production data. In addition, new projects’ developments, customizing, and data will exist in the system, and this environment will be used for unit testing. This environment is used as the initial environment for resolution of production issues and routine maintenance support.
    • An isolated and stable environment for testing the customizing, developments, and maintenance support changes. The environment is representative of the productive environment and contains all product customizing, developments and in most cases production quality data. In addition this environment will also have newly completed customizing/developments that are in quality testing phase prior to productive release. The typical testing that occurs in this environment is regression and integration testing. No development tasks are performed in this environment, just quality assurance tasks. This environment may also be used for replicating and debugging productive issues.
    • An isolated and stable production environment. The environment is the system of record and only contains productive customizing and developments. No development tasks are performed in this environment, just productive tasks. This environment may additionally be used for debugging productive issues.
    • This ‘promote to production’ scenario is recommended when implementing any system based on SAP NetWeaver.
    • This is typically called a “Three System Landscape” with 1 Production system (PRD), 1 Quality Assurance system (QAS), and 1 Development System (DEV).
    • Many customers supplement a three system landscape with a fourth environment: a standalone sandbox used for destructive testing, learning, and experimentation. The landscape is still called a three system landscape because the sandbox is not part of the ‘promote to production’ landscape.
    • A customer can choose to combine the above environments into a more minimal two system landscape. This landscape is not typical for SAP deployments, and the customer must manage the additional risk and challenge of separating and isolating the different environment activities from each other while maintaining a stable and productive environment.

    Furthermore, customers can choose to extend the three system landscape to a four system landscape. This can be appropriate for customers who have:

    1. Extensive distributed and parallel development teams, or
    2. The need to separate the quality assurance processes into two distinct environments.

    This results in one of the following additions to the ‘Three System Landscape’:

    • An additional development system: a consolidation system is inserted into the landscape to consolidate distributed developments and customizing.
    • An additional quality environment, with specific testing needs assigned to each quality system (typically one system performs application and performance testing, and the second is used for integration, user acceptance, and regression testing).
    • Development System (DEV)

    All customizing and development work is performed in this system. All system maintenance including break-fixes for productive processes is also performed in the system. After all the changes have been unit tested, these changes can be transferred to the quality assurance system (QAS) for further system testing.

    The customizing, development and production break-fix changes are promoted to the QAS system using the change management system. This ensures consistency, management, tracking and audit capabilities thus minimizing risk and human error by eliminating manual repetition of development and customizing work in each system.
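    The DEV → QAS → PRD promotion path described above can be pictured as a simple state machine in which a change may only advance after sign-off. The following Python sketch is purely illustrative (the transport ID, object names, and `promote` API are hypothetical, not SAP's actual Change and Transport System):

```python
# Illustrative sketch (not SAP code): modeling the 'promote to production'
# transport route DEV -> QAS -> PRD. All names are hypothetical.

ROUTE = ["DEV", "QAS", "PRD"]

class Transport:
    def __init__(self, transport_id, objects):
        self.transport_id = transport_id
        self.objects = objects            # customizing/development objects
        self.position = 0                 # index into ROUTE; starts in DEV

    @property
    def system(self):
        return ROUTE[self.position]

    def promote(self, tested=False):
        """Move the transport to the next system, but only after sign-off."""
        if not tested:
            raise ValueError(f"{self.transport_id}: cannot leave {self.system} untested")
        if self.position == len(ROUTE) - 1:
            raise ValueError(f"{self.transport_id}: already in {self.system}")
        self.position += 1

tr = Transport("DEVK900123", ["Z_REPORT", "Z_TABLE"])
tr.promote(tested=True)   # unit tested in DEV -> moves to QAS
tr.promote(tested=True)   # signed off in QAS  -> moves to PRD
print(tr.system)          # PRD
```

    The point of the model is that a change can never jump straight into PRD: it must pass through each environment in order, which is exactly what the change management system enforces to minimize risk and human error.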

    • Quality Assurance System (QAS)

    After unit testing the customizing, development and break-fix changes in the development system (DEV), the changes are promoted to the quality assurance system (QAS). Here, the configuration, development or changes undergo further tests and checks to ensure that they do not adversely affect other modules.

    When the configuration, developments or changes have been thoroughly tested in this system and signed off by the quality assurance team, they can be promoted to the production system (PRD).

    • Production System (PRD)

    A company uses the production system (PRD) for its live, productive work. This system— containing the company’s live data—is where the real business processes are executed. The other systems in the landscape provide a safe approach to guaranteeing that only correct and tested (that is not defective) new developments and/or customizing configurations get deployed into the productive system. Additionally they ensure that changes to productive developments and configuration by either project enhancements or maintenance do not adversely affect the production environment when deployed. Therefore the quality of the DEV and QAS system and the implemented change management processes directly impacts the quality of the production system.

    80) Can you give some business scenarios wherein you have used the standard Datastore Object?

    Ans:

    This example shows how standard DataStore objects are used to update order and delivery information and to track the status of orders, meaning which orders are open, which are partially delivered, and so on.

    There are three main steps to the entire data process:

    • Loading the data into the BI system and storing it in the PSA.

    The data requested by the BI system is stored initially in the PSA. A PSA is created for each DataSource and each source system. The PSA is the storage location for incoming data in the BI system. Requested data is saved to the PSA unchanged, exactly as it was delivered by the source system.

    • Processing and storing the data in DataStore objects

    In the second step, the DataStore objects are used on two different levels.

    a.      On level one, the data from multiple source systems is stored in DataStore objects. Transformation rules permit you to store the consolidated and cleansed data in the technical format of the BI system. On level one, the data is stored on the document level (for example, orders and deliveries) and constitutes the consolidated database for further processing in the BI system. Data analysis is therefore not usually performed on the DataStore objects at this level.

    b.      On level two, transformation rules subsequently combine the data from several DataStore objects into a single DataStore object in accordance with business-related criteria. The data is very detailed; for example, information such as the delivery quantity, the delivery delay in days, and the order status is calculated and stored per order item. Level two is used specifically for operative analysis issues, for example, which orders from the last week are still open. Unlike multidimensional analysis, where very large quantities of data are selected, here data is displayed and analyzed selectively.

    • Storing data in the InfoCube

    In the final step, the data is aggregated from the DataStore object on level two into an InfoCube. The InfoCube does not contain the order number, but saves the data, for example, on the levels of customer, product, and month. Multidimensional analysis is also performed on this data using a BEx query. You can still display the detailed document data from the DataStore object whenever you need to. Use the report/report interface from a BEx query. This allows you to analyze the aggregated data from the InfoCube and to target the specific level of detail you want to access in the data.
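    The final aggregation step – dropping the order number and rolling document-level DataStore data up to customer/product/month level in the InfoCube – can be sketched as follows. This is an illustrative Python model with made-up sample records, not SAP's actual load logic:

```python
# Illustrative sketch (not SAP code): aggregating document-level DataStore
# data (order items) into InfoCube-level totals per customer/product/month.
from collections import defaultdict

# Level-two DataStore: detailed order items (hypothetical sample data)
dso_level2 = [
    {"order": "4711", "customer": "C1", "product": "P1", "month": "2020-06", "qty": 10},
    {"order": "4712", "customer": "C1", "product": "P1", "month": "2020-06", "qty": 5},
    {"order": "4713", "customer": "C2", "product": "P2", "month": "2020-07", "qty": 8},
]

def load_into_cube(records):
    """Drop the order number; aggregate by customer, product, and month."""
    cube = defaultdict(int)
    for r in records:
        cube[(r["customer"], r["product"], r["month"])] += r["qty"]
    return dict(cube)

cube = load_into_cube(dso_level2)
print(cube[("C1", "P1", "2020-06")])  # 15 - two documents rolled into one cell
```

    Note how two separate orders collapse into a single cube cell; drilling back to the individual documents is exactly what the report/report interface to the DataStore object is for.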

    81) Can customers carry on using SQL-based SAP HANA data warehousing?

    Ans:

    Yes, we will continue to support the SQL approach. This gives customers an additional option alongside SAP BW/4HANA. Which solution is best depends on the customer’s priorities, level of expertise, and the model determining their requirements. There is no right or wrong answer. Many customers find that a hybrid approach best meets their needs.

    82) Will SAP BW/4HANA have the same capabilities as SAP BW 7.5 on SAP HANA?

    Ans:

    SAP BW/4HANA has similar functions to SAP BW powered by SAP HANA. However, SAP BW/4HANA processes only SAP HANA-optimized objects, which cuts modelling effort and the overall complexity. Following the SAP S/4HANA road map, within six months SAP BW/4HANA will differ significantly from SAP BW 7.5 powered by SAP HANA. SAP BW/4HANA is not available on other database platforms.

    83) Why should customers switch to SAP BW/4HANA if they have just upgraded to SAP BW 7.5?

    Ans:

    The main benefits of SAP BW/4HANA for SAP BW on SAP HANA customers are its simplicity, openness, advanced UIs, and performance. As the road map shows, we plan to enhance the solution considerably in the three releases in the quarters ahead.

    84) How do SAP BW/4HANA and SAP S/4HANA relate?

    Ans:

    SAP BW/4HANA is completely independent from SAP S/4HANA. Customers do not have to implement one to use the other. SAP S/4HANA Analytics can be used for operational reporting on data taken straight from S/4HANA systems. SAP BW/4HANA on the other hand is an advanced data warehouse that can be used to create reports on current, historical, and external data from a multitude of SAP and non-SAP sources.

    85) Is SAP BW/4HANA only for customers already on SAP?

    Ans:

    No, it can be used for any data warehousing needs, for data from SAP and non-SAP sources alike.

    86) Which analytics tools can be used with SAP BW/4HANA?

    Ans:

    SAP BW/4HANA supports SAP BusinessObjects Cloud and SAP BusinessObjects BI. It also has interfaces open to third-party providers.

    87) What Is Ad Hoc Analysis?

    Ans:

    In traditional data warehouses, such as SAP BW, a lot of pre-aggregation is done for quick results. That is, the administrator (the IT department) decides which information might be needed for analysis and prepares the results for the end users. This gives fast performance, but the end user has no flexibility.

    Performance drops dramatically if the user wants to analyze data that has not already been pre-aggregated. With SAP HANA and its speedy engine, no pre-aggregation is required. Users can perform any kind of operation in their reports and do not have to wait for hours for the data to be prepared for analysis.
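    The trade-off can be made concrete with a small sketch. Here, IT pre-aggregates revenue by region only; a spontaneous question about revenue by product must then be answered from the raw rows. The data and function names below are illustrative, not from any SAP API:

```python
# Illustrative sketch: why pre-aggregation limits ad hoc analysis.
sales = [
    {"region": "EU", "product": "P1", "revenue": 100},
    {"region": "EU", "product": "P2", "revenue": 50},
    {"region": "US", "product": "P1", "revenue": 70},
]

# Classic DW: IT prepares one fixed aggregate in advance.
pre_aggregated = {}
for row in sales:
    pre_aggregated[row["region"]] = pre_aggregated.get(row["region"], 0) + row["revenue"]

# In-memory style: any grouping is computed on the fly at query time.
def ad_hoc(rows, dimension):
    result = {}
    for row in rows:
        result[row[dimension]] = result.get(row[dimension], 0) + row["revenue"]
    return result

print(pre_aggregated["EU"])             # 150 - fast, but only this grouping
print(ad_hoc(sales, "product")["P1"])   # 170 - any dimension, no preparation
```

    With an in-memory engine the on-the-fly path is fast enough to be the default, so the choice of analysis dimensions no longer has to be made in advance by IT.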

    88) Mention What Is The Role Of The Transaction Manager And Session?

    Ans:

    The transaction manager coordinates database transactions and keeps a record of running and closed transactions. When a transaction is rolled back or committed, the transaction manager notifies the involved storage engines about the event so they can run necessary actions. 
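    The notification pattern described above – a coordinator tracking running and closed transactions and informing each storage engine on commit or rollback so it can run its follow-up actions – can be sketched generically. All class and method names here are hypothetical; this is not SAP HANA's internal API:

```python
# Illustrative sketch (hypothetical names): a transaction manager keeping a
# record of running and closed transactions and notifying storage engines.
class StorageEngine:
    def __init__(self):
        self.events = []
    def on_transaction_end(self, tx_id, outcome):
        # An engine would release locks, discard undo data, etc. here.
        self.events.append((tx_id, outcome))

class TransactionManager:
    def __init__(self, engines):
        self.engines = engines
        self.running = set()
        self.closed = {}

    def begin(self, tx_id):
        self.running.add(tx_id)

    def _finish(self, tx_id, outcome):
        self.running.discard(tx_id)
        self.closed[tx_id] = outcome
        for engine in self.engines:   # notify engines so they can act
            engine.on_transaction_end(tx_id, outcome)

    def commit(self, tx_id):
        self._finish(tx_id, "committed")

    def rollback(self, tx_id):
        self._finish(tx_id, "rolled back")

engine = TransactionManager.__new__  # placeholder removed below
engine = StorageEngine()
tm = TransactionManager([engine])
tm.begin("tx1"); tm.commit("tx1")
tm.begin("tx2"); tm.rollback("tx2")
print(engine.events)  # [('tx1', 'committed'), ('tx2', 'rolled back')]
```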

    89) Can You Disable Cache?

    Ans:

    Yes. The cache can be disabled either globally or for an individual query, using the query debug tool (transaction RSRT).

    90) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?

    Ans:

    Initially we do not delete the setup tables, but when we change the extract structure we do. Changing the extract structure means that newly added fields exist which were not there before. So, to get the required data and avoid redundancy, we delete and then refill the setup tables.

    This refreshes the statistical data. The extraction setup reads the dataset that you want to process (for example, customer orders from tables such as VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables have been filled.

    91) HOW MANY LEVELS CAN YOU GO DOWN IN REPORTING?

    Ans:

    You can drill down to any level by using Navigational attributes and jump targets. 

    92) WHAT ARE INDEXES? 

    Ans:

    Indexes are database indexes, which help in retrieving data quickly.

    93) Why is there a DataSource with ‘0’ records in RSA7 if delta exists and has also been loaded successfully? 

    Ans:

    It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11. 

    94) IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?

    Ans:

    No. 

    95) What are the X & Y Tables?

    Ans:

    X-table = A table that links material SIDs with the SIDs of time-independent navigation attributes.

    Y-table = A table that links material SIDs with the SIDs of time-dependent navigation attributes.

    There are four types of SID tables:

    • X – time-independent navigational attribute SID tables
    • Y – time-dependent navigational attribute SID tables
    • H – hierarchy SID tables
    • I – hierarchy structure SID tables
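    The practical difference between X- and Y-tables is the validity interval: a Y-table row only applies for a date range, so a lookup needs a key date. The sketch below models this with made-up SIDs and a hypothetical attribute (a material's "price band"); it is an illustration of the concept, not BW's actual table layout:

```python
# Illustrative sketch (hypothetical data): X-table rows link a material SID to
# SIDs of time-independent navigation attributes; Y-table rows additionally
# carry a validity period for time-dependent attributes.
x_table = [  # material SID -> plant SID (time-independent: no date columns)
    {"material_sid": 1001, "plant_sid": 7},
]
y_table = [  # material SID -> price-band SID, valid only within an interval
    {"material_sid": 1001, "priceband_sid": 3,
     "date_from": "2019-01-01", "date_to": "2019-12-31"},
    {"material_sid": 1001, "priceband_sid": 4,
     "date_from": "2020-01-01", "date_to": "9999-12-31"},
]

def lookup_time_dependent(table, material_sid, key_date):
    """Return the attribute SID valid for the given key date, if any."""
    for row in table:
        if (row["material_sid"] == material_sid
                and row["date_from"] <= key_date <= row["date_to"]):
            return row["priceband_sid"]
    return None

print(lookup_time_dependent(y_table, 1001, "2020-07-03"))  # 4
```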

    96) What does the number in the ‘Total’ column in Transaction RSA7 mean?  

    Ans:

    The ‘Total’ column displays the number of LUWs that were written in the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW System and a new delta request has been received from the BW System. 

    97) How to know in which table (SAP BW) contains Technical Name / Description and creation data of a particular Reports. Reports that are created using BEx Analyzer.

    Ans:

    There is no single such table in BW. If you just want to see these details, press the Properties button when opening a particular query and you will see all the details you need.

    Otherwise, you will find the technical names and descriptions of queries in the following tables:

    • RSRREPDIR – Directory of all reports
    • RSZELTDIR – Directory of the reporting component elements
    • RSRWORKBOOK – Where-used list for reports in workbooks (the connections between workbooks and queries)
    • RSRWBINDEXT – Titles of Excel workbooks in the InfoCatalog

    98) What is a LUW in the delta queue?

    Ans:

    A LUW from the point of view of the delta queue can be an individual document, a group of documents from a collective run or a whole data packet of an application extractor. 

    99) Why does the number in the ‘Total’ column in the overview screen of Transaction RSA7 differ from the number of data records that is displayed when you call the detail view? 

    Ans:

    The number on the overview screen corresponds to the total number of LUWs (see also the earlier question on the ‘Total’ column) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection criteria of the preceding delta init requests are filtered out, so only the records that are ready for the next delta request are displayed on the detail screen. A customer exit, if one exists, is not taken into account in the detail screen of Transaction RSA7.

    100) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading? 

    Ans:

    Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change when the first delta was loaded to the BW System. 
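    This confirmation protocol – LUWs stay queued until the *next* delta request implicitly confirms the previous one, so they can be resent if the load has to be repeated – can be sketched as follows. The class and data are illustrative, not the real qRFC implementation:

```python
# Illustrative sketch (not the real qRFC implementation): LUWs remain in the
# delta queue until the NEXT delta request confirms the previous one arrived,
# which is why RSA7's 'Total' still shows them after a successful load.
class DeltaQueue:
    def __init__(self):
        self.luws = []        # unconfirmed LUWs (previous + next delta)
        self.last_delta = []  # LUWs sent with the most recent delta request

    def write(self, luw):
        self.luws.append(luw)

    def delta_request(self):
        # A new request confirms the previous delta: those LUWs are deleted.
        for luw in self.last_delta:
            self.luws.remove(luw)
        # Everything still queued is sent now and kept for a possible repeat.
        self.last_delta = list(self.luws)
        return list(self.last_delta)

q = DeltaQueue()
q.write("doc1"); q.write("doc2")
q.delta_request()          # delta loads doc1 and doc2 ...
print(len(q.luws))         # 2 - still counted in 'Total' until the next request
q.write("doc3")
q.delta_request()          # confirms doc1/doc2, sends doc3
print(len(q.luws))         # 1
```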

    101) Why are selections not taken into account when the delta queue is filled? 

    Ans:

    Filtering according to selections takes place when the system reads from the delta queue. This is necessary for reasons of performance. 
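    In other words, the queue stores every record and the InfoPackage selections act only at read time. A minimal illustrative sketch (hypothetical field names, not SAP code):

```python
# Illustrative sketch: the delta queue stores every record; selections are
# applied only when the queue is read, not when it is filled.
delta_queue = [
    {"doc": "4711", "company_code": "1000"},
    {"doc": "4712", "company_code": "2000"},
    {"doc": "4713", "company_code": "1000"},
]

def read_delta(queue, selections):
    """Apply the selection filter at read time, for performance."""
    return [rec for rec in queue
            if all(rec[field] == value for field, value in selections.items())]

print(len(delta_queue))                                        # 3 records stored
print(len(read_delta(delta_queue, {"company_code": "1000"})))  # 2 delivered
```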
