Thursday, February 23, 2017

Oracle Endeca ITL and Experience Manager High Availability Architecture

  • What are High Availability and Failover

    • High availability is the ability of the system to continue functioning after the failure of one or more of its servers.
    • Failover is one possible implementation of high availability. It is the ability of client connections to migrate from one server to another in the event of a server failure, so client applications can continue to operate.

    How to Setup ITL and XM Failover

    Two Information Transformation Layer (ITL) servers hosting Tools and Frameworks are required to achieve high availability, plus a third server for Disaster Recovery (DR) if needed.
    Only one ITL can be active at a time; the remaining failover/backup ITL should stay passive. As shown in the architecture diagram below, specific folders should be shared between the primary and backup ITL servers using a shared drive. DR is generally hosted in a different data center, so file-system replication or another technique is required for the DR environment. The shared drive can be set up during product and application installation.
    The following points need to be taken care of during installation:
    • The installation path should be the same in all environments to avoid conflicts and issues.
    • The Endeca application initialize services script needs to be run from each ITL server the first time.
    • Make sure to copy the same Dgraph cluster XML with the same naming convention to avoid duplicate Dgraph instances.
    The block diagram below illustrates the HA setup.

    What needs to be shared between the active and passive ITL servers (a sketch of one way to share these folders follows the list):-

    1. Tools and Frameworks state folder
    2. Endeca application data folder
    3. CAS workspace folder
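
    One way (not the only one) to share these folders is an NFS mount plus symlinks on each ITL server. The sketch below is illustrative only; the NFS server, mount point, and folder paths are assumptions and will differ per installation:

        # Run on both the primary and the backup ITL server (all paths are examples)
        sudo mount -t nfs nfs-server:/endeca_shared /mnt/endeca_shared

        # After moving the original folder contents onto the share,
        # replace the local folders with links into the shared drive
        ln -s /mnt/endeca_shared/tools_state   /usr/local/endeca/ToolsAndFrameworks/11.1.0/server/workspace/state
        ln -s /mnt/endeca_shared/app_data      /usr/local/endeca/apps/Discover/data
        ln -s /mnt/endeca_shared/cas_workspace /usr/local/endeca/CAS/workspace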

    What happens if failover occurs mid-process

    • If the primary ITL fails during the Dgidx phase, the failover ITL has to begin the Dgidx process again.
    • If the primary ITL fails during the Forge phase, the failover ITL has to begin the entire process again.
    • If the primary CAS indexing fails, the failover ITL has to begin the entire process again.
    Note : ATG properties need to be changed to redirect indexing to the failover ITL in the event of a primary ITL fault, but the best practice is usually to simply restart the indexing process.
    For more Endeca-related information, visit: http://www.ajayagrwal.com/

    Monday, January 2, 2017

    Endeca Basics : Interview Questions and Answers - Part 1


  • Endeca Basics - Interview Questions - Part-1

    1. What is Endeca, and why would you use Endeca instead of other search products?
    2. What are dimensions and properties?
    3. What is the difference between a single-select and a multi-select dimension, and how do you create each?
    4. Can the same source property be used as both a property and a dimension? If so, how?
    5. What is a hierarchical dimension and how can you create one?
    6. How can you get the complete hierarchy of a hierarchical dimension using a single query?
    7. What is dimension search and how is it different from record search?
    8. Which Endeca query parameters and values return all Endeca records from the MDEX Engine?
    9. Is it possible to get record and dimension search results using a single query?
    10. What are the different types of Endeca queries available?
    11. Is it possible to store properties inside a dimension, and how? In which scenario would you use this?
    12. What are the different types of dimensions available in Endeca?
    13. What is the difference between internal and external dimensions, and in which file do the dimension values get stored?
    14. How do you modify a dimension ID?
    15. How do you remove an unused dimension value from Endeca Experience Manager?


    Reply with your answers in the comments section. This will help others understand the Endeca basics. I will consolidate the replies and post the answers, and will post another set of questions soon.


    Monday, December 19, 2016

    Endeca 11.x : Unable to retrieve site definition for site id


  • Behavior :- The following exception/error appears while accessing the Endeca pages:

    SEVERE: Unable to retrieve site definition for site id: /storeSiteUS
    com.endeca.store.exceptions.PathNotFoundException: No node found at path: [pages].
            at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:153)
            at com.endeca.store.configuration.InternalNode.getNodeInfo(InternalNode.java:221)
            at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:150)
            at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:61)
            at com.endeca.infront.site.SiteManager.getSite(SiteManager.java:147)
            at atg.endeca.assembler.multisite.SiteStateParserImpl.parseSiteState(SiteStateParserImpl.java:94)
            at com.endeca.infront.site.SiteStateBuilder.createSiteState(SiteStateBuilder.java:110)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at atg.nucleus.factory.instance.MethodInstanceFactory.createInstance(MethodInstanceFactory.java:303)
            at atg.nucleus.PropertyConfiguration.loadFromInstanceFactory(PropertyConfiguration.java:697)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:780)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:741)
            at atg.nucleus.NucleusNameResolver.createNewInstance(NucleusNameResolver.java:1619)
            at atg.nucleus.MultiRootNameResolver.createFromName(MultiRootNameResolver.java:833)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:455)
            at atg.nucleus.ConfigurationRef.getValue(ConfigurationRef.java:119)
            at atg.nucleus.SimpleComponentState.setBeanProperty(SimpleComponentState.java:406)
            at atg.nucleus.SimpleConfigurationState.saveToBean(SimpleConfigurationState.java:240)
            at atg.nucleus.SimpleConfigurationState.configureBean(SimpleConfigurationState.java:263)
            at atg.nucleus.BeanConfigurator.configureBean(BeanConfigurator.java:297)
            at atg.nucleus.PropertyConfiguration.configureService(PropertyConfiguration.java:1055)
            at atg.nucleus.MultiRootConfiguratorImpl.configureService(MultiRootConfiguratorImpl.java:103)
            at atg.nucleus.MultiRootNameResolver.configureService(MultiRootNameResolver.java:1116)
            at atg.nucleus.MultiRootNameResolver.configureAndStartService(MultiRootNameResolver.java:1195)
            at atg.nucleus.MultiRootNameResolver.bindAndConfigureService(MultiRootNameResolver.java:959)
            at atg.nucleus.MultiRootNameResolver.createFromName(MultiRootNameResolver.java:849)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:455)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:1064)
            at atg.nucleus.factory.instance.MethodInstanceFactory.getMethodToInvoke(MethodInstanceFactory.java:218)
            at atg.nucleus.factory.instance.MethodInstanceFactory.createInstance(MethodInstanceFactory.java:290)
            at atg.nucleus.PropertyConfiguration.loadFromInstanceFactory(PropertyConfiguration.java:697)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:780)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:741)
            at atg.nucleus.NucleusNameResolver.createNewInstance(NucleusNameResolver.java:1619)
            at atg.nucleus.MultiRootNameResolver.createFromName(MultiRootNameResolver.java:833)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:455)
            at atg.nucleus.ConfigurationRef.getValue(ConfigurationRef.java:119)

            at atg.nucleus.PropertyConfiguration.loadFromInstanceFactory(PropertyConfiguration.java:697)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:780)
            at atg.nucleus.PropertyConfiguration.createNewInstance(PropertyConfiguration.java:741)
            at atg.nucleus.NucleusNameResolver.createNewInstance(NucleusNameResolver.java:1619)
            at atg.nucleus.MultiRootNameResolver.createFromName(MultiRootNameResolver.java:833)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:455)
            at atg.nucleus.ConfigurationRef.getValue(ConfigurationRef.java:119)
            at atg.nucleus.SimpleComponentState.setBeanProperty(SimpleComponentState.java:406)
            at atg.nucleus.SimpleConfigurationState.saveToBean(SimpleConfigurationState.java:240)
            at atg.nucleus.SimpleConfigurationState.configureBean(SimpleConfigurationState.java:263)
            at atg.nucleus.BeanConfigurator.configureBean(BeanConfigurator.java:297)
            at atg.nucleus.PropertyConfiguration.configureService(PropertyConfiguration.java:1055)
            at atg.nucleus.MultiRootConfiguratorImpl.configureService(MultiRootConfiguratorImpl.java:103)
            at atg.nucleus.MultiRootNameResolver.configureService(MultiRootNameResolver.java:1116)
            at atg.nucleus.MultiRootNameResolver.configureAndStartService(MultiRootNameResolver.java:1195)
            at atg.nucleus.MultiRootNameResolver.bindAndConfigureService(MultiRootNameResolver.java:959)
            at atg.nucleus.MultiRootNameResolver.createFromName(MultiRootNameResolver.java:849)
            at atg.nucleus.MultiRootNameResolver.resolveName(MultiRootNameResolver.java:455)
            at atg.nucleus.ResolveNameHelperImpl.resolveName(ResolveNameHelperImpl.java:274)
            at atg.servlet.DynamoHttpServletRequest.resolveNameSingleNucleus(DynamoHttpServletRequest.java:3898)
            at atg.servlet.DynamoHttpServletRequest.resolveName(DynamoHttpServletRequest.java:3857)
            at atg.servlet.DynamoHttpServletRequest.resolveName(DynamoHttpServletRequest.java:3983)
            at atg.endeca.assembler.NucleusAssembler.resolveHandler(NucleusAssembler.java:182)


    Solution :- Follow the steps mentioned in one of my previous posts to resolve the above error.

    Unable to retrieve site definition for site id
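
    As a general hint (not necessarily the steps covered in the linked post), this error often means the site/page definitions have never reached the live ECR store, so a quick first check is to promote content from the application's control folder. The application path below is an assumption:

        # Promote authoring content (site definitions, pages) to the live store
        cd /usr/local/endeca/apps/Discover/control    # assumed application control folder
        ./promote_content.sh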


    Sunday, December 18, 2016

How to import a specific set of rules/content to Endeca Experience Manager?


  • Oracle Endeca 11.1 onward supports importing a specific content item into Endeca Experience Manager by calling the ECR repository. This feature makes it possible to import content generated by CMS solutions into XM using a scheduled job, after extending/customizing the script to produce the XML/JSON format that XM supports.

    This can be achieved using the runcommand utility script to import a specific/single content item.

    Here is the command

    <<Endeca_App_Path>>/Discover/control/runcommand.sh IFCR importContent "content/Shared/banner/Hero Banner" "<<Content_full_Path>>/content/Shared/banner/Hero Banner"

    The above command will import the Hero Banner content into XM. Other utilities are available as well to import and export content.
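
    As a sketch of the scheduled-job approach mentioned above, the same command could be wrapped in a small script and triggered from cron. The application path, content drop folder, and schedule below are assumptions:

        #!/bin/sh
        # import_banner.sh - assumed wrapper around the command shown above
        APP_CONTROL=/usr/local/endeca/apps/Discover/control   # assumed application control folder
        CONTENT_DIR=/data/cms_export                          # assumed drop folder for CMS-generated content

        "$APP_CONTROL"/runcommand.sh IFCR importContent "content/Shared/banner/Hero Banner" \
            "$CONTENT_DIR/content/Shared/banner/Hero Banner"

        # Example crontab entry: run the import every night at 2 AM
        # 0 2 * * * /usr/local/endeca/scripts/import_banner.sh >> /var/log/endeca_import.log 2>&1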

    Wednesday, December 14, 2016

    How to add a user segment from ATG to Endeca without creating the segment in ATG?


  • ATG passes all the user segments from the profile to the Endeca query in order to trigger different rules or content. There are situations where a user segment does not exist in the profile but is required in the Endeca query to trigger different types of content.

    Example :- Show one banner if the user is coming from affiliate A and a different banner if the user is coming from affiliate B.

    This can be achieved using the code below, which adds a user segment based on the business logic.

    // Package/import names assume the standard Endeca Assembler API; adjust to your version.
    import com.endeca.infront.navigation.NavigationState;
    import com.endeca.infront.navigation.NavigationStateProcessor;
    import com.endeca.infront.navigation.UserState;

    public class ExtendNavigationStateProcessor implements NavigationStateProcessor {

      private UserState mUserState = null;

      @Override
      public void process(NavigationState pNavigationState) {
        // Replace this placeholder with the real business rule, for example a check
        // on a request parameter or cookie that identifies the affiliate.
        boolean comingFromAffiliateA = true; // placeholder condition

        if (comingFromAffiliateA) {
          getUserState().addUserSegments("Affiliate-A");
        } else {
          getUserState().addUserSegments("Affiliate-B");
        }
      }

      public UserState getUserState() {
        return mUserState;
      }

      public void setUserState(UserState pUserState) {
        mUserState = pUserState;
      }
    }



    Wednesday, November 9, 2016

    What are the different deployment templates available in Endeca?


  • Oracle provides three different types of deployment templates to create Endeca applications.

    • CAS-Forge based deployment template - This was being used until version 11.1. This type of application reads data from the CAS record store and pushes it into the Endeca Forge process to create Endeca records.
    • Legacy Forge based deployment template - This template is used for pure legacy/old Endeca implementations and provides the indexing capability to read data from any source and perform multiple joins in the pipeline process.
    • CAS based deployment template - This template was introduced from version 11.1 onwards. It reads all the data, including dimension values, from the record store and uses only CAS to transform the data, which reduces index-creation time. Its major limitation is that it only provides a switch join between different data sources.
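
    Whichever template type is chosen, the application itself is created by running the deployment template's deploy script against a deployment descriptor (the descriptor selects the template type). The Tools and Frameworks path below is an assumption for a typical 11.x install:

        # Assumed Tools and Frameworks install path; adjust to your environment
        TF_HOME=/usr/local/endeca/ToolsAndFrameworks/11.1.0

        # Create a new application from a deployment descriptor
        "$TF_HOME"/deployment_template/bin/deploy.sh --app /path/to/deploy.xml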

    Sunday, May 1, 2016

    Oracle Endeca 11.x : How to Migrate an Endeca Application between environments?


  • As part of the development/debugging phase, an Endeca application sometimes needs to be migrated from QA to UAT, or from Production to Staging.

    The following items need to be migrated between environments. A description of each item is given below.

    • Record Store - The record store contains all indexable records (product catalog, content, etc.) and dimension information.
    • Workbench application - The business creates content using Endeca Experience Manager, which gets stored in the ECR (Endeca Configuration Repository). This application contains XM content, cartridges, and other configuration such as the thesaurus.
    • Index-config - The index-config contains all properties, dimensions, precedence rules, and dictionary-related configuration in JSON format.
      • Dimension Value Id Manager - The Dimension Value Id Manager contains the Endeca auto-generated dimension value IDs. The Endeca N-values can differ if this is not copied to the target environment.
      • Endeca Pipeline - This folder contains the Endeca application indexing configuration XML, including properties, dimensions, search configuration, etc.
      • Editors Config - This folder contains the configuration files that call the MDEX to show appropriate data in Experience Manager cartridges (see the snapshot below).
      (Snapshot: productdataservice.json, an example editor configuration file)

      As part of the migration process, the application needs to be exported first, and the same set of files can then be imported into another environment.

      Export Endeca Application steps 

      1. Record Store - Click here to get more details. This step is optional if the baseline update is triggered using the /atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin component.

      2. Dimension Value Id Manager - Navigate to the CAS bin folder and run the following command to export the mappings:
      CAS_Installation_Path/CAS/11.1.0/bin->./cas-cmd.[bat|sh] exportDimensionValueIdMappings -m <<dimension-value-id-manager name>> -f /home/atg/dimvalid.csv

      3. Endeca Pipeline - Copy Endeca_App/config/mdex folder

      4. Editors Config - Copy Endeca_App/config/ifcr/configuration/tools/xmgr folder

      5. Workbench Application - Navigate to the Endeca application control folder and run the following command to export the content:
             <<Endeca_App>>/control->./runcommand.[bat|sh] IFCR exportApplication </path to generate file>
      This would generate the zip/exploded folder.


      6. Index-Config - Navigate to the Endeca application control folder and run the following command to export index-config.json:
           <<Endeca_APP_PATH>>/control->./index_config_cmd.[bat|sh] get-config -o all -f File_Path/index-config.json
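
      To tie these export steps together, a small wrapper script along the following lines could run steps 2, 5, and 6 in one go (steps 3 and 4 are plain folder copies). All paths, the application name, and the dimension value id manager name are assumptions for illustration:

          #!/bin/sh
          # export_endeca_app.sh - sketch consolidating export steps 2, 5 and 6 above
          CAS_BIN=/usr/local/endeca/CAS/11.1.0/bin              # assumed CAS bin folder
          APP_CONTROL=/usr/local/endeca/apps/Discover/control   # assumed application control folder
          EXPORT_DIR=/home/atg/endeca_export                    # assumed export location

          mkdir -p "$EXPORT_DIR"

          # Step 2: export dimension value id mappings
          "$CAS_BIN"/cas-cmd.sh exportDimensionValueIdMappings -m Discover-dimension-value-id-manager \
              -f "$EXPORT_DIR"/dimvalid.csv

          # Step 5: export the Workbench application (XM content, cartridges, etc.)
          "$APP_CONTROL"/runcommand.sh IFCR exportApplication "$EXPORT_DIR"/workbench_export

          # Step 6: export index-config.json (properties, dimensions, precedence rules, dictionary)
          "$APP_CONTROL"/index_config_cmd.sh get-config -o all -f "$EXPORT_DIR"/index-config.json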



      Import Endeca Application steps - Steps should be in the following order.

      1. Record Store - Click here to get more details. This step is optional if the baseline update is triggered using the /atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin component.

      2. Dimension Value Id Manager - Run the command below to import the dimension value mappings:
      <CAS_Installation_Path>/CAS/11.1.0/bin->./cas-cmd.[bat|sh] importDimensionValueIdMappings -m <<dimension-value-id-manager name>> -f /home/atg/dimvalid.csv

      3. Endeca Pipeline - Copy the exported <<Endeca_App>>/config/mdex folder into the target application.

      4. Editors Config - Copy the editors config exported in step 4, update the host and port information in each JSON file for the target environment, and run the following command:
               <<Endeca_App>>/control -> set_editors_config.[bat|sh]

      5. Index-Config - Run the command below to import index-config.json:
      <<Endeca_APP_PATH>>/control->./index_config_cmd.[bat|sh] set-config -o all -f File_path/index-config.json


      6. Endeca Baseline Update - Run the command below:

            <<Endeca_App>>/control -> ./baseline_update.[bat|sh]

      7. Workbench Content - Run the command below to import the Workbench content:
              <<Endeca_APP_PATH>>/control->./runcommand.sh IFCR importApplication <<export_zip_file/folder_path>>

      8. Promote Content - Run the following command to promote content from authoring to live. Content promotion can also be run from Workbench.
               <<Endeca_App>>/control -> ./promote_content.[bat|sh]
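
      Similarly, import steps 4 through 8 can be strung together in order as a sketch; the commands are the ones shown above, and the application path and export location are assumptions:

          #!/bin/sh
          # import_endeca_app.sh - sketch running import steps 4 to 8 above, in order
          APP_CONTROL=/usr/local/endeca/apps/Discover/control   # assumed target application control folder
          EXPORT_DIR=/home/atg/endeca_export                    # assumed location of the exported files

          # Step 4: apply the editors config (after the JSON files have been edited for host/port)
          "$APP_CONTROL"/set_editors_config.sh

          # Step 5: import index-config.json
          "$APP_CONTROL"/index_config_cmd.sh set-config -o all -f "$EXPORT_DIR"/index-config.json

          # Step 6: run a baseline update so the MDEX picks up the new configuration
          "$APP_CONTROL"/baseline_update.sh

          # Step 7: import the Workbench content exported earlier
          "$APP_CONTROL"/runcommand.sh IFCR importApplication "$EXPORT_DIR"/workbench_export

          # Step 8: promote content from authoring to live
          "$APP_CONTROL"/promote_content.sh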

