
How chx selects the SA8650 camx pipeline/node XML configuration

Contents

usecases

pipeline

node

UsecaseAuto::Initialize

1. Initialize metadata Manager and initialize input client

2. Get the default matching usecase for the stream combination

3. Create pipeline and assign all pipeline parameters

4. Create session

5. Register metadata clients

6. Get the extension module instance for pipeline activation: pExtensionModuleInstance->ActivatePipeline

UsecaseSelector::DefaultMatchingUsecaseSelection: get the usecase array for the corresponding usecase ID

1. Check the resolution

   1) Get the usecase's streamConfigMode

   2) From the usecase's pChiUsecases, check the resolution in the ChiTarget BufferDimension of UsecaseAuto_TARGET_BUFFER_RAW1_target in the UsecaseAuto_Targets array, then compare the ChiStreamFormat list UsecaseAuto_TARGET_BUFFER_RAW1_formats against pStreamConfig to decide which format to use

2. If there is a raw stream, enable PruneRawTargetStrings

3. If it is a YUV stream, enable PruneYUVTargetStrings

4. If it is a UBWCTp10 stream, enable PruneUBWCTp10TargetStrings

5. If there are preview streams, enable PrunePreviewTargetStrings

6. Check whether m_enableAutoNoIPE is configured

7. Check whether scaling/cropping is enabled

8. UsecaseMatches(g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseAuto"))

   Parse g_SocIdNameToFunctionPointerMap to obtain the pSelectedUsecase array
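Steps 2 to 5 above all hinge on first classifying each framework stream (raw, YUV, preview, metadata) before any pruning happens. A minimal sketch of that classification loop, using hypothetical types rather than the real camera3_stream_t helpers:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical, simplified stream descriptor; the real code inspects
// camera3_stream_t via IsRaw16Stream(), IsYUVOutStream(), etc.
enum class StreamKind { Raw16, Yuv, Preview, Meta };

struct StreamCounts { uint32_t raw = 0, yuv = 0, preview = 0, meta = 0; };

// Analogue of the classification loop in DefaultMatchingUsecaseSelection:
// count each stream kind so the matching/pruning step can pick a usecase.
inline StreamCounts ClassifyStreams(const std::vector<StreamKind>& streams)
{
    StreamCounts c;
    for (StreamKind s : streams)
    {
        switch (s)
        {
            case StreamKind::Raw16:   c.raw++;     break;
            case StreamKind::Yuv:     c.yuv++;     break;
            case StreamKind::Preview: c.preview++; break;
            case StreamKind::Meta:    c.meta++;    break;
        }
    }
    return c;
}
```

The resulting counts (numRawStreams, numYUVStreams, ...) drive which prune target strings get enabled.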

Pipeline created successfully. Nodes: BPS, IPE, JPEG, JPEG AGGREGATOR, plus the links between them

 Pipeline::CreateDescriptor

1. Initialize m_pipelineDescriptor and pipelineCreateData

2. Parse the node info in ChiNode

   1. pNodeProperties[i].pValues holds the algorithms supported by each node: com.qti.stats.pdlibwrapper, com.qti.hvx.addconstant

   2. Get the HDR mode info: m_HDRInfo[logicalCameraId]

   3. For usecases such as torch widget and AON, IsNoBufferUsecase = TRUE

   4. The current scenario uses neither torch widget nor AON, so for the auto usecase IsNoBufferUsecase = FALSE

5. Get the frame rate and check whether HDR mode is supported

6. Get m_cameraCaps.numSensorModes and pSensorModeInfo from pLogicalCameraInfo

7. Determine from pLogicalCameraInfo which HDR mode is supported

8. Check whether cropping is supported; set the resolution according to the configured sensor mode


apps/qnx_ap/AMSS/multimedia/qcamera/camera_qcx/cdk_qcx/oem/qcom/topology/titan/sa8650/

sa8650_usecase.xml

apps/qnx_ap/AMSS/multimedia/qcamera/camera_qcx/cdk_qcx/oem/qcom/topology/titan/usecase-components/usecases/UsecaseAuto/pipelines

camxAutoYUV.xml

References the RealTimeBaseAuto & StatsSegmentAuto node algorithms

segments/sa8650/

RealTimeBaseAuto.xml

com.qti.hvx.addconstant & com.qti.stats.pdlibwrapper algorithms

m_platformID is obtained when the camx ExtensionModule is loaded and initialized:

ExtensionModule::ExtensionModule()

m_platformID = SocUtils::GetSocId();

 ChiPopulatePipelineData pFuncPopulatePipelineData =
        reinterpret_cast<ChiPopulatePipelineData>(ChxUtils::LibGetAddr(m_chiUsecaseHandle, "PopulatePipelineData"));

    if (NULL != pFuncPopulatePipelineData)
   {
       pFuncPopulatePipelineData(m_platformID);
    }
    else
    {
       CHX_LOG_ERROR("Failed to load PopulatePipelineData lib");
    }
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// PopulatePipelineData
///
/// @brief  Populate the global map variable with correct data from usecase xml generated file based on socId
///
/// @param  socID             [IN]  SocId for current target
///
/// @return None
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID UsecaseSelector::PopulatePipelineData(SocId socId)
{
    // ... (load-and-call branch elided in the original excerpt)
    else
    {
        CHX_LOG_ERROR("Failed to load pFuncPopulatePipelineData lib");
    }
}

Get the g_SocIdNameToFunctionPointerMap configuration for the corresponding platform:

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// PopulatePipelineData
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
extern "C" CAMX_VISIBILITY_PUBLIC  VOID PopulatePipelineData(
    SocId socId)
{
    pFunc pPopulateUseCaseInfo = NULL;
    FillMapdata();

    switch (socId)
    {
        case SocId::SM8450:
            pPopulateUseCaseInfo = g_SocIdNameToFunctionPointerMap.at("sm8450");
            break;
        case SocId::SA8650P:
            pPopulateUseCaseInfo = g_SocIdNameToFunctionPointerMap.at("sa8650");
            break;
        case SocId::SA8630P:
            pPopulateUseCaseInfo = g_SocIdNameToFunctionPointerMap.at("sa8630");
            break;
        default:
            break;
    }
    if (NULL != pPopulateUseCaseInfo)
    {
        pPopulateUseCaseInfo();
    }
    else
    {
        CHX_LOG_ERROR("Error Failed to populate pipeline data");
    }
}
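The switch above resolves a per-SoC populate routine from g_SocIdNameToFunctionPointerMap by name and calls it only when non-NULL. A minimal standalone analogue of that dispatch (the routines and map contents here are illustrative, not the real chiusecase symbols):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-ins for the per-SoC usecase population routines.
using PopulateFn = const char* (*)();

static const char* PopulateSa8650() { return "sa8650"; }
static const char* PopulateSa8630() { return "sa8630"; }

// Analogue of g_SocIdNameToFunctionPointerMap: SoC name -> populate routine.
static const std::map<std::string, PopulateFn> kSocMap = {
    { "sa8650", &PopulateSa8650 },
    { "sa8630", &PopulateSa8630 },
};

// Analogue of PopulatePipelineData(): dispatch by SoC name, NULL-safe.
// Returns nullptr (instead of logging an error) when the SoC is unknown.
inline const char* DispatchPopulate(const std::string& socName)
{
    auto it = kSocMap.find(socName);
    return (it != kSocMap.end()) ? it->second() : nullptr;
}
```

Note the real code uses `.at()`, which throws on a missing key; `find()` makes the unknown-SoC path explicit instead.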

cdk_qcx/oem/qcom/chiusecase/common/g_pipelines.cpp 

void FillMapssa8650()

apps/qnx_ap/AMSS/multimedia/qcamera/camera_qcx/cdk_qcx/oem/qcom/chiusecase/auto/chxusecaseselector.cpp

chxusecaseselector.cpp:

Parse pStreamConfig to determine the pipeline configuration, and parse the g_SocIdNameToFunctionPointerMap parameters

1. Initialize metadata Manager and initialize input client

2. Get the default matching usecase for the stream combination

3. Create pipeline and assign all pipeline parameters

Pipeline::Create m_pPipelines[index]->GetDescriptorMetadata

result = m_pPipelines[index]->CreateDescriptor =Pipeline::CreateDescriptor

4. Create session

   Session::Create

5. Register metadata clients

   pPipeline->SetMetadataClientId(m_metadataClients[index])

6. get extension module instance for pipeline activation
  pExtensionModuleInstance->ActivatePipeline

UsecaseSelector::DefaultMatchingUsecase(pStreamConfigPerPipeline, 0); calls into the per-platform cdk_qcx/oem/qcom/chiusecase/(platform)/chxusecaseselector.cpp

GetDefaultMatchingUsecase retrieves the usecase info and eventually calls down to DefaultMatchingUsecaseSelection

DefaultMatchingUsecaseSelection selects the supported usecase based on the resolution, format, and operation_mode in pStreamConfig

Pipeline::Create creates the pipeline

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// UsecaseSelector::DefaultMatchingUsecase
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
ChiUsecase* UsecaseSelector::DefaultMatchingUsecase(
    camera3_stream_configuration_t* pStreamConfig,
    UINT32                          bpp)
{
    // ... (library-load branch elided in the original excerpt)

    else
    {
        CHX_LOG_INFO("ChiusecaseSelector able to load handle lib %p", handle);
    }
    ChiUsecaseSelector pFuncChiUsecaseSelector =
                 reinterpret_cast<ChiUsecaseSelector>(ChxUtils::LibGetAddr(handle, "GetDefaultMatchingUsecase"));
    if (NULL == pFuncChiUsecaseSelector)
    {
        CHX_LOG_ERROR("Failed to load pFuncChiUsecaseSelector lib");
    }
    else
    {
        pSelectedUsecase = pFuncChiUsecaseSelector(pStreamConfig, bpp);
    }
    return pSelectedUsecase;
}



////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// UsecaseSelector::DefaultMatchingUsecaseSelection
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
extern "C" CAMX_VISIBILITY_PUBLIC ChiUsecase* UsecaseSelector::DefaultMatchingUsecaseSelection(
    camera3_stream_configuration_t* pStreamConfig,
    UINT32                          bpp)
{
    // ... (elided in the original excerpt)

    else

                 //pStream->format
                if (UsecaseSelector::IsRaw16Stream(pFwkStream))

                //pStream->stream_type & pStream->format
                if (UsecaseSelector::IsYUVOutStream(pFwkStream))
                {
                    numYUVStreams++;
                }
                if (UsecaseSelector::IsPreviewStream(pFwkStream))
                {
                    numPreviewStreams++;
                }
                if (UsecaseSelector::IsUBWCTP10Stream(pFwkStream))
                {
                    numUBWCStreams++;
                }
                if (UsecaseSelector::IsP01208Stream(pFwkStream))
                {
                    isP01208Stream = TRUE;
                }
                if (UsecaseSelector::IsP01210Stream(pFwkStream))
                {
                    isP01210Stream = TRUE;
                }
                if (UsecaseSelector::IsP010Stream(pFwkStream))
                {
                    isP010Stream = TRUE;
                }
                if (UsecaseSelector::IsP01208LSBStream(pFwkStream))
                {
                    isP01208LSBStream = TRUE;
                }
                if (UsecaseSelector::IsP01210LSBStream(pFwkStream))
                {
                    isP01210LSBStream = TRUE;
                }
                if (UsecaseSelector::IsP010LSBStream(pFwkStream))
                {
                    isP010LSBStream = TRUE;
                }
                if (UsecaseSelector::IsRGBIStream(pFwkStream))
                {
                    isRGBIStream = TRUE;
                }
                if (UsecaseSelector::IsRGBPStream(pFwkStream))
                {
                    isRGBPStream = TRUE;
                }
                if (UsecaseSelector::IsRawPlain1612Stream(pFwkStream))
                {
                    isRawPlain1612Stream++;
                }
            }
            else
            {
                numMetaStreams++;
            }
        }

        auto AddSetting = [&pruneSettings, &variants](const CHAR* pGroup, const CHAR* pType) -> VOID

                else

            }
            else
            {
                CHX_LOG_WARN("Invalid Prune Setting - Group: %s(%s) Setting: %s(%s)", pGroup, group, pType, type);
            }
        };

        auto UsecaseMatches = [&](const ChiUsecase* const pUsecase) -> BOOL
        { /* ... body elided in the original excerpt ... */ };

        // Pruning raw streams
        for (UINT32 i = 0; i < MaxRawStreamsPerPipeline; i++)

        // Pruning for auto YUV targets
        for (UINT32 i = 0; i < MaxYuvStreamsPerPipeline; i++)

        for (UINT32 i = 0; i < MaxUBWCStreamsPerPipeline; i++)

        // Pruning Preview Streams
        for (UINT32 i = 0; i < MaxPreviewStreamsPerPipeline; i++)

        // get override settings
        // Check whether m_enableAutoNoIPE is configured
        isNoIPEEnabled = ExtensionModule::GetInstance()->EnableAutoNoIPEpipeline();

        CHX_LOG_INFO("numPreviewStreams %d, numUBWCStreams %d, numYUVStreams %d, numRawStreams %d",
            numPreviewStreams, numUBWCStreams, numYUVStreams, numRawStreams);

        AddSetting("Raw16",   (TRUE == isRaw16Stream)   ? "Enabled" : "Disabled");
        AddSetting("Plain16_12",   (TRUE == isRawPlain1612Stream)   ? "Enabled" : "Disabled");
        AddSetting("NoIPE",   (TRUE == isNoIPEEnabled)  ? "Enabled" : "Disabled");
        AddSetting("AWB",     (TRUE == ExtensionModule::GetInstance()->EnableAutoAWB()) ? "Enabled" : "Disabled");
        AddSetting("AEC",     (TRUE == ExtensionModule::GetInstance()->EnableAutoAEC()) ? "Enabled" : "Disabled");
        AddSetting("P01208Format", (TRUE == isP01208Stream)  ? "Enabled" : "Disabled");
        AddSetting("P01210Format", (TRUE == isP01210Stream)  ? "Enabled" : "Disabled");
        AddSetting("P010Format",   (TRUE == isP010Stream)    ? "Enabled" : "Disabled");
        AddSetting("P01208LSBFormat", (TRUE == isP01208LSBStream)  ? "Enabled" : "Disabled");
        AddSetting("P01210LSBFormat", (TRUE == isP01210LSBStream)  ? "Enabled" : "Disabled");
        AddSetting("P010LSBFormat",   (TRUE == isP010LSBStream)    ? "Enabled" : "Disabled");
        AddSetting("RGBIFormat",    (TRUE == isRGBIStream)     ? "Enabled" : "Disabled");
        AddSetting("RGBPFormat",    (TRUE == isRGBPStream)     ? "Enabled" : "Disabled");

        for (UINT32 i = 0; i < MaxRawStreamsPerPipeline; i++)

        //check Vendortag
        // 1 for bayer, 2 for mono
        camera_metadata_entry_t entry = {};

        UINT32 enableICMS = ChxUtils::QuerySessionParam(pStreamConfig, "EnableICMS", entry);
        UINT32 enableSRV  = ChxUtils::QuerySessionParam(pStreamConfig, "EnableSRV",  entry);
        UINT32 enableFFC  = ChxUtils::QuerySessionParam(pStreamConfig, "EnableFFC",  entry);

        ChxUtils::QuerySessionParam(pStreamConfig, "ScalerCropRegion", entry);
        if (NULL != entry.data.i32)

        }

        CHX_LOG_INFO("enableICMS %d, enableSRV %d, enableFFC %d", enableICMS, enableSRV, enableFFC);
        // Parse g_SocIdNameToFunctionPointerMap to obtain pSelectedUsecase
        // assign the logical camera usecase and pipelines accordingly
        // bayer ICMS usecase
        if ((1 == enableICMS) && (TRUE == UsecaseMatches(g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseICMS"))))
        {
            pSelectedUsecase = g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseICMS");
        }

        // mono usecase
        if ((2 == enableICMS) && (TRUE == UsecaseMatches(g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseICMSMono"))))
        {
            pSelectedUsecase = g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseICMSMono");
        }

        // SRV usecase
        if ((1 == enableSRV) && (TRUE == UsecaseMatches(g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseSRV"))))
        {
            pSelectedUsecase = g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseSRV");
        }

        // FFC usecase
        if ((1 == enableFFC))

            else if ((2 == ExtensionModule::GetInstance()->getCurrentSoc()) &&
                    TRUE == UsecaseMatches(g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseFFCCVSOC")))
            {
                CHX_LOG_INFO("Selected FFC: CVSOC usecase");
                pSelectedUsecase = g_UsecaseNameToUsecaseInstanceMap.at("g_pUsecaseFFCCVSOC");
            }
            else

            }
        }

        if (NULL == pSelectedUsecase)

            }
            else

            }
        }

        if (NULL == pSelectedUsecase)
        {
            CHX_LOG_ERROR("Fatal: no Usecase Selected or Usecase Matching Failed");
        }
        else
        {
            // Handle more usecases, currently handling only RAW usecase.
            UINT32      totalPipelineIdx = DefaultPipelineIdx;
            UINT32      pipelineDescIdx[5] = {0};
            ChiUsecase* pClonedUsecase     = NULL;

            // Select the right pipeline index based on override setting
            if ((2 != enableICMS) || (1 != enableSRV))

                    }
                    else if (1 == enableFFC)

                        }
                        else if ((1 == ExtensionModule::GetInstance()->getCurrentSoc()) ||
                                 (2 == ExtensionModule::GetInstance()->getCurrentSoc()))

                        }
                        else

                        }
                    }
                    else if (TRUE == ExtensionModule::GetInstance()->EnableAutoNoIPEpipeline())

                        if ((0 == CdkUtils::StrCmp(
                                pSelectedUsecase->pPipelineTargetCreateDesc[i].pPipelineName, "AutoNoIPE")) ||
                            (0 == CdkUtils::StrCmp(
                                pSelectedUsecase->pPipelineTargetCreateDesc[i].pPipelineName , "AutoYUV")))
                        {
                            pipelineDescIdx[0] = i;
                            break;
                        }
                    }
                    else if (TRUE == ExtensionModule::GetInstance()->IsOfflineIFEEnabled())

                    }
                    else

                    }
                }
            }

            // Prune
            pClonedUsecase = UsecaseSelector::CloneUsecase(pSelectedUsecase, totalPipelineIdx, pipelineDescIdx);

            result = UsecaseSelector::PruneUsecaseDescriptor(pClonedUsecase,
                                                    pruneSettings.numSettings,
                                                    pruneSettings.pVariants,
                                                    &pSelectedUsecase);

            if (NULL != pClonedUsecase)
            {
                UsecaseSelector::DestroyUsecase(pClonedUsecase);
                pClonedUsecase = NULL;
            }

            if (NULL != pSelectedUsecase)

            }
            else
            {
                CHX_LOG_ERROR("Failed to match usecase. pSelectedUsecase is NULL");
            }
        }
    }

    return pSelectedUsecase;
}
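The AddSetting lambda in the listing above collects (group, type) prune variants such as ("Raw16", "Enabled") that PruneUsecaseDescriptor later applies against the cloned usecase. A simplified sketch of that accumulation (types and names are illustrative, not the real chx pruneSettings structures):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Each prune variant is a (group, type) pair, e.g. ("Raw16", "Enabled").
struct PruneList
{
    std::vector<std::pair<std::string, std::string>> variants;

    // Analogue of the AddSetting lambda: record whether a group is
    // Enabled or Disabled for this stream configuration.
    void AddSetting(const char* pGroup, bool enabled)
    {
        variants.emplace_back(pGroup, enabled ? "Enabled" : "Disabled");
    }
};

// Build the list the way DefaultMatchingUsecaseSelection does, from the
// stream-classification flags computed earlier (two flags shown here).
inline PruneList BuildPruneList(bool isRaw16Stream, bool isNoIPEEnabled)
{
    PruneList list;
    list.AddSetting("Raw16", isRaw16Stream);
    list.AddSetting("NoIPE", isNoIPEEnabled);
    return list;
}
```

In the real code the accumulated variants are handed to PruneUsecaseDescriptor together with the cloned usecase.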

 Pipeline::Create

1) CHX_NEW Pipeline, then pPipeline->Initialize

2) Initialize pPipeline->m_pPipelineName = pName;

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Pipeline::Create
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Pipeline* Pipeline::Create(
    UINT32       cameraId,
    PipelineType type,
    const CHAR*  pName)
{
    // ... (allocation and Initialize() call elided in the original excerpt)

        else
        {
            pPipeline->m_pPipelineName = pName;
        }
    }

    return pPipeline;
}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Pipeline::Initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult Pipeline::Initialize(
    UINT32       cameraId,
    UINT32       logicalCameraId,
    PipelineType type)
{
    CDKResult result = CDKResultSuccess;

    CHX_LOG_INFO("Initializing Pipeline with cameraId:%d logicalCameraId:%d type:%d", cameraId, logicalCameraId, type);

    m_cameraId              = cameraId;
    m_logicalCameraId       = logicalCameraId;
    m_type                  = type;
    m_pipelineActivated     = FALSE;
    m_isDeferFinalizeNeeded = FALSE;
    m_SensorModePickhint    = {};
    m_isNameAllocated       = FALSE;
    m_isSensorModeHintSet   = FALSE;
    m_numInputBuffers       = 0;

    m_pPipelineDescriptorMetadata = ChiMetadata::Create();
    if (NULL == m_pPipelineDescriptorMetadata)
    {
        result = CDKResultENoMemory;
        CHX_LOG_ERROR("Failed to allocate memory for Pipeline Metadata");
    }

    if (m_type == PipelineType::OfflinePreview)
    {
        m_numInputBuffers  = 1; // Offline IFE input buffer
        m_numOutputBuffers = 1; // Preview
        SetupRealtimePreviewPipelineDescriptor();
    }

    return result;
}
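Pipeline::Create and Pipeline::Initialize follow the common two-phase construction pattern: the static factory allocates, runs Initialize(), and the caller only ever sees a fully initialized object or NULL. A generic sketch of the pattern (Widget is a stand-in, not a chx type):

```cpp
#include <cassert>
#include <cstddef>

// Two-phase construction: constructor stays trivial, Initialize() does the
// work that can fail, and Create() tears down on failure.
class Widget
{
public:
    static Widget* Create(int id)
    {
        Widget* p = new Widget();
        if (!p->Initialize(id))   // mirrors pPipeline->Initialize(...)
        {
            delete p;             // a failed init never escapes the factory
            p = nullptr;
        }
        return p;
    }
    int Id() const { return m_id; }

private:
    Widget() : m_id(-1) {}
    bool Initialize(int id)
    {
        if (id < 0) return false; // stand-in for an allocation failure
        m_id = id;
        return true;
    }
    int m_id;
};
```

This is why the caller of Pipeline::Create only has to NULL-check the returned pointer rather than query a separate initialization status.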

m_pipelineDescriptor.pNodes   nodeId = 65538

m_nodes[nodeIndex].nodeAllPorts.pInputPorts    portId=8

 m_nodes[nodeIndex].nodeAllPorts.pOutputPorts   portId=0

m_pipelineDescriptor.pLinks   

srcNode.nodeId = 65538

m_links[0].numDestNodes = 1;

m_linkNodeDescriptors[0].nodeId  = 2
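The values above describe one link: source node 65538, output port 0, feeding destination node 2 on input port 8. A minimal sketch of descriptor shapes that would hold them (field names are illustrative, not the real CHI descriptor structs):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One endpoint of a link: a node and the port on that node.
struct LinkNodeDesc { uint32_t nodeId; uint32_t portId; };

// A link: one source endpoint fanning out to one or more destinations.
struct LinkDesc
{
    LinkNodeDesc              srcNode;
    std::vector<LinkNodeDesc> destNodes;
};

// Reproduce the values quoted above: node 65538 output port 0 -> node 2
// input port 8, with numDestNodes == 1.
inline LinkDesc MakeSensorToNodeLink()
{
    LinkDesc link;
    link.srcNode = { 65538u, 0u };        // source node, output port 0
    link.destNodes.push_back({ 2u, 8u }); // destination node 2, input port 8
    return link;
}
```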
 

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// Pipeline::SetupRealtimePreviewPipelineDescriptor
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID Pipeline::SetupRealtimePreviewPipelineDescriptor()

1. Initialize m_pipelineDescriptor and pipelineCreateData

   These are populated when UsecaseAuto::Initialize parses UsecaseAuto_pipelines:

      pipelineOutputBuffer[streamIdx].pStream      = pSinkTargetDesc->pTarget->pChiStream;
      pipelineOutputBuffer[streamIdx].pNodePort    = pSinkTargetDesc->pNodePort;

      pipelineOutputBuffer[streamIdx].numNodePorts = pSinkTargetDesc->numNodePorts;
 

2. Parse the node info in ChiNode

   1. pNodeProperties[i].pValues holds the algorithms supported by each node: com.qti.stats.pdlibwrapper, com.qti.hvx.addconstant

   2. Get the HDR mode info: m_HDRInfo[logicalCameraId]

   3. For usecases such as torch widget and AON, IsNoBufferUsecase = TRUE

   4. The current scenario uses neither torch widget nor AON, so for the auto usecase IsNoBufferUsecase = FALSE

5. Get the frame rate and check whether HDR mode is supported

6. Get m_cameraCaps.numSensorModes and pSensorModeInfo from pLogicalCameraInfo

    modeCount:

    pLogicalCameraInfo->m_cameraCaps.numSensorModes

    pAllModes:

    pLogicalCameraInfo->pSensorModeInfo

7. Determine from pLogicalCameraInfo which HDR mode is supported

   1) In-sensor three-exposure real-time HDR preview

   2) Staggered HDR (line-interleaved HDR)

   3) MFHDR (multi-frame HDR)

   4) QHDR (Quad HDR, quad-pixel HDR)

8. Check whether cropping is supported; set the resolution according to the configured sensor mode
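Step 7's branching can be sketched as a mapping from HDR feature mode to the desired sensor-mode capabilities; per Pipeline::CreateDescriptor, MFHDR additionally runs the sensor at twice the desired output frame rate. Enum and field names here are illustrative stand-ins for the chx types:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for the HDR feature modes listed above.
enum class HdrMode { None, InSensor3Exp, SHDR, MFHDR, QHDR };

struct DesiredMode
{
    bool     ihdr      = false; // in-sensor 3-exposure HDR capability bit
    bool     normal    = false; // plain sensor mode
    uint32_t frameRate = 0;
};

// Mirror the CreateDescriptor branches: each HDR mode maps to capability
// bits, and MFHDR doubles the sensor frame rate.
inline DesiredMode PickDesiredMode(HdrMode mode, uint32_t outputFps)
{
    DesiredMode d;
    d.frameRate = outputFps;
    switch (mode)
    {
        case HdrMode::InSensor3Exp:
            d.ihdr = true;
            break;
        case HdrMode::MFHDR:
            d.normal = true;
            d.frameRate *= 2; // sensor runs at 2x the desired output rate
            break;
        default:
            break;            // SHDR/QHDR branches are elided in the source
    }
    return d;
}
```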


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Pipeline::CreateDescriptor
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult Pipeline::CreateDescriptor()
{
    CDKResult          result                    = CDKResultSuccess;
    PipelineCreateData pipelineCreateData        = {};

    m_pipelineDescriptor.isRealTime              = HasSensorNode(&m_pipelineDescriptor);

    // m_cameraId from usecase side must be correct, even for pipelines without sensor Node
    m_pipelineDescriptor.cameraId                = m_cameraId;
    m_pipelineDescriptor.logicalCameraId         = m_logicalCameraId;
    m_pipelineDescriptor.context                 = m_context;

    pipelineCreateData.pPipelineName             = m_pPipelineName;
    pipelineCreateData.numOutputs                = m_numOutputBuffers;
    pipelineCreateData.pOutputDescriptors        = &m_pipelineOutputBuffer[0];
    pipelineCreateData.numInputs                 = m_numInputBuffers;
    pipelineCreateData.pInputOptions             = &m_pipelineInputOptions[0];
    pipelineCreateData.pPipelineCreateDescriptor = &m_pipelineDescriptor;

    CHIPIPELINECREATEDESCRIPTOR* pCreateDesc = pipelineCreateData.pPipelineCreateDescriptor;

    pCreateDesc->numBatchedFrames        = ExtensionModule::GetInstance()->GetNumBatchedFrames(m_logicalCameraId);
    pCreateDesc->HALOutputBufferCombined = ExtensionModule::GetInstance()->GetHALOutputBufferCombined();
    pCreateDesc->maxFPSValue             = ExtensionModule::GetInstance()->GetUsecaseMaxFPS(m_logicalCameraId);

    const CHAR* pClientName = "Chi::Pipeline::CreateDescriptor";
    SetTuningUsecase();

    m_pPipelineDescriptorMetadata->AddReference(pClientName);
    m_pipelineDescriptor.hPipelineMetadata = m_pPipelineDescriptorMetadata->GetHandle();

    CHX_LOG_CORE_CFG("Pipeline[%s] pipeline pointer %p numInputs=%d, numOutputs=%d stream w x h: %d x %d "
        "format: %d, numBatchedFrames: %d, HALOutputBufferCombined: %d maxFPSValue: %d cameraId: %d logicalCameraId:%d",
        m_pPipelineName,
        this,
        pipelineCreateData.numInputs,
        pipelineCreateData.numOutputs,
        (NULL != pipelineCreateData.pOutputDescriptors->pStream) ? pipelineCreateData.pOutputDescriptors->pStream->width : 0,
        (NULL != pipelineCreateData.pOutputDescriptors->pStream) ? pipelineCreateData.pOutputDescriptors->pStream->height : 0,
        (NULL != pipelineCreateData.pOutputDescriptors->pStream) ? pipelineCreateData.pOutputDescriptors->pStream->format : 0,
        pCreateDesc->numBatchedFrames,
        pCreateDesc->HALOutputBufferCombined,
        pCreateDesc->maxFPSValue,
        pipelineCreateData.pPipelineCreateDescriptor->cameraId,
        pipelineCreateData.pPipelineCreateDescriptor->logicalCameraId);

    UINT32 enableSWMCTFwithReferenceFrame = ExtensionModule::GetInstance()->GetMCTFwithReferenceFrameStatus(m_logicalCameraId);
    ChxUtils::SetVendorTagValue(m_pPipelineDescriptorMetadata,
        VendorTag::SWMCTFEnableWithRef,
        1,
        &enableSWMCTFwithReferenceFrame);

    UINT32 facialContourVersion = ExtensionModule::GetInstance()->GetFacialContourVersion(m_logicalCameraId);
    ChxUtils::SetVendorTagValue(m_pPipelineDescriptorMetadata,
        VendorTag::FacialContourVersion,
        1,
        &facialContourVersion);

    // Update stats skip pattern in node property with value from override
    //m_pipelineDescriptor.numNodes=3
    for (UINT node = 0; node < m_pipelineDescriptor.numNodes; node++)
    {
       //pNodes[node]=UsecaseAuto_AutoOfflineIFENodes[node]
        const ChiNode* const pChiNode = &m_pipelineDescriptor.pNodes[node];

       //pChiNode->numProperties=1 2 4
       //pChiNode->pNodeProperties=UsecaseAuto_AutoOfflineIFE_node0_0_properties
       //                            UsecaseAuto_AutoOfflineIFE_node65536_1_properties
       //                            UsecaseAuto_AutoOfflineIFE_node65536_0_properties
        for (UINT i = 0; i < pChiNode->numProperties; i++)
        {
           //pChiNode->pNodeProperties[i].id=1
            switch(pChiNode->pNodeProperties[i].id)
            { // parse the node's algorithm properties
            //pNodeProperties[i].pValues = com.qti.stats.pdlibwrapper  com.qti.hvx.addconstant
                case NodePropertyStatsSkipPattern://6
                    m_statsSkipPattern = ExtensionModule::GetInstance()->GetStatsSkipPattern();
                    pChiNode->pNodeProperties[i].pValue = &m_statsSkipPattern;
                    break;
                case NodePropertyEnableFOVC://16
                    m_enableFOVC = ExtensionModule::GetInstance()->EnableFOVCUseCase();
                    pChiNode->pNodeProperties[i].pValue = &m_enableFOVC;
                    break;
                case NodePropertyNISInternalTrigger://21 
                    m_isNISInternalTrigger = ExtensionModule::GetInstance()->IsInternalTriggered(m_logicalCameraId);
                    pChiNode->pNodeProperties[i].pValue = &m_isNISInternalTrigger;
                    break;
                default:
                    break;
            }
        }
    }
    // Initialize m_hPipelineHandle via CreatePipelineDescriptor
    m_hPipelineHandle = ExtensionModule::GetInstance()->CreatePipelineDescriptor(&pipelineCreateData);

    m_pPipelineDescriptorMetadata->ReleaseReference(pClientName);

    if (NULL == m_hPipelineHandle)
    {
        result = CDKResultEFailed;
        CHX_LOG_ERROR("Fail due to NULL pipeline handle");
    }
    else
    // ... (elided in the original excerpt)
            // Get the frame rate
            desiredSensorMode.frameRate = ExtensionModule::GetInstance()->GetUsecaseMaxFPS(m_logicalCameraId);
            // Check whether HDR mode is supported
            if (ExtensionModule::GetInstance()->IsVideoHDRMode())
            // ... (elided in the original excerpt)
                    }
                    return ChiPtrView<CHISENSORMODEINFO>(static_cast<SIZE_T>(0), NULL);
                }();

                auto SupportsZZHDR = [&](const ChiSensorModeInfo& rSensorModeInfo)
                {
                    return rSensorModeInfo.sensorModeCaps.u.ZZHDR;
                };
                desiredSensorMode.sensorModeCaps.u.ZZHDR = std::any_of(sensorModes.begin(), sensorModes.end(), SupportsZZHDR);
            } // in-sensor three-exposure real-time HDR preview
            else if (SelectInSensorHDR3ExpUsecase::InSensorHDR3ExpPreview ==
                     ExtensionModule::GetInstance()->SelectInSensorHDR3ExpUsecase())
            {
                desiredSensorMode.sensorModeCaps.u.IHDR = 1;
            } // Staggered HDR (line-interleaved HDR)
            else if (HDRFeatureModeSHDR == physicalHDRMode)

                }
            } // MFHDR (multi-frame HDR)
            else if (HDRFeatureModeMFHDR == physicalHDRMode)
            {
                desiredSensorMode.sensorModeCaps.u.Normal = TRUE;
                // For MFHDR case, we will run sensor @ twice the desired output framerate
                desiredSensorMode.frameRate *= 2;
            } // QHDR (Quad HDR, quad-pixel HDR)
            else if (HDRFeatureModeQHDR == physicalHDRMode)

            }

            UINT index = FindHighestWidthInputIndex(m_pipelineInputOptions, m_numInputOptions);
            // @todo Select the highest width/height from all the input buffer requirements
            desiredSensorMode.optimalWidth  = m_pipelineInputOptions[index].bufferOptions.optimalDimension.width;
            desiredSensorMode.optimalHeight = m_pipelineInputOptions[index].bufferOptions.optimalDimension.height;
            desiredSensorMode.maxWidth      = m_pipelineInputOptions[index].bufferOptions.maxDimension.width;
            desiredSensorMode.maxHeight     = m_pipelineInputOptions[index].bufferOptions.maxDimension.height;
            desiredSensorMode.minWidth      = m_pipelineInputOptions[index].bufferOptions.minDimension.width;
            desiredSensorMode.minHeight     = m_pipelineInputOptions[index].bufferOptions.minDimension.height;
            desiredSensorMode.forceMode     = ExtensionModule::GetInstance()->GetForceSensorMode(m_cameraId);

            if (TRUE == m_isSensorModeHintSet)

                if (0 != m_SensorModePickhint.sensorModeCaps.value)
                {
                    desiredSensorMode.sensorModeCaps.value = m_SensorModePickhint.sensorModeCaps.value;
                }
                if (0 != m_SensorModePickhint.frameRateMultiplier)
                {
                    desiredSensorMode.frameRate *= m_SensorModePickhint.frameRateMultiplier;
                }

                if (TRUE == m_SensorModePickhint.sensorModeCaps.u.QuadCFA)
                {
                    desiredSensorMode.sensorRemosaicType = ExtensionModule::GetInstance()->GetRemosaicType();
                }
            }
            if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId))
            {
                desiredSensorMode.sensorModeCaps.u.FS = 1;
            }

            if (HDRFeatureModeQHDR == physicalHDRMode)

            }

            m_pSelectedSensorMode                   = ChxSensorModeSelect::FindBestSensorMode(m_cameraId, &desiredSensorMode);
            m_pSelectedSensorMode->batchedFrames    = ExtensionModule::GetInstance()->GetNumBatchedFrames(m_logicalCameraId);
            m_pSelectedSensorMode->HALOutputBufferCombined = ExtensionModule::GetInstance()->GetHALOutputBufferCombined();
        }

        if (TRUE == m_pipelineDescriptor.isRealTime)

            m_pipelineInfo.pipelineInputInfo.isInputSensor              = TRUE;
            m_pipelineInfo.pipelineInputInfo.sensorInfo.cameraId        = m_cameraId;
            m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo = m_pSelectedSensorMode;
            CHX_LOG_CORE_CFG("Pipeline[%s] Pipeline pointer %p Selected sensor Mode W=%d, H=%d Mode=%d",
                m_pPipelineName,
                this,
                m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo->frameDimension.width,
                m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo->frameDimension.height,
                m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo->modeIndex);

            std::vector<UINT32> transition_modes = {1, 2, 3};
            std::copy(transition_modes.begin(), transition_modes.end(), std::back_inserter(m_transitionModesList));
            // add changes to get the list of seamless mode transitions possible for this sensor mode
        }
        else
        // ... (elided in the original excerpt)
                sensorOutDim.width              = m_pSelectedSensorMode->frameDimension.width;
                sensorOutDim.height             = m_pSelectedSensorMode->frameDimension.height;

                for (UINT32 i = 0; i < m_numInputOptions; i++)

                    if ((rBufferOptions.minDimension.width  > rBufferOptions.optimalDimension.width) ||
                        (rBufferOptions.minDimension.height > rBufferOptions.optimalDimension.height))
                    {
                        rBufferOptions.optimalDimension = rBufferOptions.minDimension;
                    }
                }
            }
        }

        m_pipelineInfo.hPipelineDescriptor                = reinterpret_cast<CHIPIPELINEDESCRIPTOR>(m_hPipelineHandle);
        m_pipelineInfo.pipelineOutputInfo.hPipelineHandle = NULL;
        m_pipelineInfo.pipelineResourcePolicy             = m_resourcePolicy;
        m_pipelineInfo.isDeferFinalizeNeeded              = m_isDeferFinalizeNeeded;
    }

    return result;
}
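ChxSensorModeSelect::FindBestSensorMode itself is not shown in this excerpt. As a rough guess at the kind of policy involved, the sketch below picks the tightest-fitting mode that still covers the desired optimal width at the requested frame rate. This is an assumed policy for illustration, not the real selection logic:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sensor mode: just the fields this sketch needs.
struct SensorMode { uint32_t width; uint32_t height; uint32_t fps; };

// Return the index of the smallest mode that satisfies both the desired
// optimal width and frame rate, or -1 if no mode qualifies.
inline int FindBestSensorModeIndex(const std::vector<SensorMode>& modes,
                                   uint32_t optimalWidth,
                                   uint32_t fps)
{
    int best = -1;
    for (std::size_t i = 0; i < modes.size(); i++)
    {
        if (modes[i].fps < fps || modes[i].width < optimalWidth)
        {
            continue; // cannot satisfy the request
        }
        if (best < 0 || modes[i].width < modes[best].width)
        {
            best = static_cast<int>(i); // tightest fit so far
        }
    }
    return best; // -1 when nothing qualifies
}
```

The real implementation also weighs the min/max dimensions, forceMode, and capability bits (ZZHDR, IHDR, FS, QuadCFA) set up in desiredSensorMode above.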
