Algorithm Overview
To give users better image quality, today's phones integrate third-party image-processing algorithms. On MTK platforms, HAL3 provides plugin hooks for them at the P2 stage. By the number of input frames and cameras an algorithm needs, these algorithms fall roughly into three categories:
Single-frame algorithms:
Common single-frame algorithms include beautification (face slimming, skin smoothing, eye enlarging), wide-angle lens distortion correction, sticker/expression overlays, single-camera background blur (fake dual-camera bokeh), and so on. Any algorithm that needs only a single input frame belongs to this category. Typically one frame goes in and one processed frame comes out.
Multi-frame algorithms:
Common multi-frame algorithms include MFNR (multi-frame noise reduction), HDR (high dynamic range), and so on. Any algorithm that needs several consecutive input frames belongs to this category. Typically several consecutive frames go in and one processed frame comes out.
Dual-camera algorithms:
The most common dual-camera algorithm is dual-camera depth, also called dual-camera background blur (bokeh); there are also color + mono dual-camera algorithms that enhance night shots. Single-frame and multi-frame algorithms only need images from one camera, whereas a dual-camera algorithm needs images from both the main and the auxiliary camera, and usually requires the two to be synchronized. It takes one synchronized frame from each camera and, after processing, outputs a single main-camera image, which is all the user ever sees.
Based on this rough classification, the MTK HAL algorithm integration series consists of three articles:
MTK HAL Algorithm Integration: Single-Frame Algorithms
MTK HAL Algorithm Integration: Multi-Frame Algorithms
MTK HAL Algorithm Integration: Dual-Camera Algorithms
This is the first of the three. The whole series is based on Android 9.0 on the MT6763 platform, with HAL3 as the HAL version.
1. Preparation Before Algorithm Integration
Before starting the integration work, the algorithm should first be given a basic evaluation, and the integration itself should meet some requirements.
1.1 Algorithm Requirements and Evaluation
Good processing quality: no worse than competing products, better if possible. (Like subjective camera tuning, this is highly subjective and often wishful thinking; in practice it depends on the project requirements.)
Stable results across scenes and under stress testing.
No color cast after processing, and no loss of sharpness or saturation, or at most a loss within an acceptable range.
An acceptable output resolution, ideally up to the camera's maximum resolution.
The faster the processing the better; it must not exceed the competitor's time or the project/product target time.
No memory leaks and low memory usage.
The necessary integration documentation is provided, including the algorithm type, input/output image requirements, input parameter requirements, and so on.
Note: where possible, ask the algorithm vendor for concrete reference figures for the quantifiable metrics (processing time, memory usage, resolution, and so on), so that the integration can be verified against them afterwards.
1.2 Integration Requirements
Whether the algorithm is integrated can be controlled per project at build time.
Whether the algorithm is enabled can be controlled by a parameter at runtime (a minimal sketch of such a switch follows this list).
The integrated library runs correctly, behaves stably under stress testing, and has no memory leaks.
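For the runtime switch, the plugin shown later in this article simply reads an Android system property in its constructor. A minimal sketch of that pattern, reusing the property name from the WatermarkCapture.cpp listing in section 4.2.3:

#include <cutils/properties.h>   // property_get_bool, available to vendor-side code

// Read the debug switch; default to "enabled" when the property is not set.
// The property can be flipped at runtime (e.g. via setprop) for quick A/B checks.
static bool isWatermarkEnabled()
{
    return property_get_bool("vendor.debug.camera.watermark.capture.enable", 1) != 0;
}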
1.3 Integration Steps
(1) Choose a feature type for the algorithm; if none of the features provided by MTK fits, add a custom feature.
(2) Add the algorithm's feature type to the scenario configuration table.
(3) Choose a plugin type for the algorithm, then write a CPP file that implements the plugin and hooks in the algorithm.
(4) If the algorithm cannot reuse the metadata provided by Android and MTK, define custom metadata for it so that the app can control whether the algorithm is enabled.
First, I prepared a libwatermark.so that does nothing but add a watermark to an image; it stands in for a third-party single-frame algorithm library. If you want to see how the watermarking itself is implemented, see my other article: Android 實(shí)現(xiàn)圖片加水印或logo (adding a watermark or logo to an image on Android). Next, we will walk through the integration steps one by one; the assumed interface of libwatermark.so is sketched just below.
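Since libwatermark.so only stands in for a real third-party library, its interface is tiny. The actual header is not reproduced in this article, but judging from how the plugin code below calls it (Watermark::add() with an RGBA source image, an RGBA watermark and an offset), watermark.h can be assumed to look roughly like this; treat it as illustrative, not as the shipped header:

// Assumed shape of include/watermark.h, inferred from the calls made in
// WatermarkCapture.cpp / WatermarkPreview.cpp later in this article.
#ifndef TP_WATERMARK_H
#define TP_WATERMARK_H

class Watermark {
public:
    // Blend a wmWidth x wmHeight RGBA watermark into the srcWidth x srcHeight
    // RGBA source image at offset (offsetX, offsetY); operates on src in place.
    static void add(unsigned char* src, int srcWidth, int srcHeight,
                    unsigned char* watermark, int wmWidth, int wmHeight,
                    int offsetX, int offsetY);
};

#endif // TP_WATERMARK_H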
2. Choosing a Feature for the Algorithm
2.1 Features Provided by MTK
MTK already provides a number of features in mtk_feature_type.h and customer_feature_type.h.
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/mtk/mtk_feature_type.h:
NO_FEATURE_NORMAL        = 0ULL,
// MTK (bit 0-31)
MTK_FEATURE_MFNR         = 1ULL << 0,
MTK_FEATURE_HDR          = 1ULL << 1,
MTK_FEATURE_REMOSAIC     = 1ULL << 2,
MTK_FEATURE_ABF          = 1ULL << 3,
MTK_FEATURE_NR           = 1ULL << 4,
MTK_FEATURE_FB           = 1ULL << 5,
MTK_FEATURE_CZ           = 1ULL << 6,
MTK_FEATURE_DRE          = 1ULL << 7,
MTK_FEATURE_DEPTH        = 1ULL << 8,
MTK_FEATURE_BOKEH        = 1ULL << 9,
MTK_FEATURE_VSDOF        = (MTK_FEATURE_DEPTH|MTK_FEATURE_BOKEH),
MTK_FEATURE_FSC          = 1ULL << 10,
MTK_FEATURE_3DNR         = 1ULL << 11,
MTK_FEATURE_EIS          = 1ULL << 12,
MTK_FEATURE_AINR         = 1ULL << 13,
MTK_FEATURE_DUAL_YUV     = 1ULL << 14,
MTK_FEATURE_DUAL_HWDEPTH = 1ULL << 15,
MTK_FEATURE_AIS          = 1ULL << 16,
MTK_FEATURE_HFG          = 1ULL << 17,
MTK_FEATURE_DCE          = 1ULL << 18,
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:
// ThirdParty (bit 32-63)
TP_FEATURE_HDR          = 1ULL << 32,
TP_FEATURE_MFNR         = 1ULL << 33,
TP_FEATURE_EIS          = 1ULL << 34,
TP_FEATURE_FB           = 1ULL << 35,
TP_FEATURE_FILTER       = 1ULL << 36,
TP_FEATURE_DEPTH        = 1ULL << 37,
TP_FEATURE_BOKEH        = 1ULL << 38,
TP_FEATURE_VSDOF        = (TP_FEATURE_DEPTH|TP_FEATURE_BOKEH),
TP_FEATURE_FUSION       = 1ULL << 39,
TP_FEATURE_HDR_DC       = 1ULL << 40, // used by DualCam
TP_FEATURE_DUAL_YUV     = 1ULL << 41,
TP_FEATURE_DUAL_HWDEPTH = 1ULL << 42,
TP_FEATURE_PUREBOKEH    = 1ULL << 43,
TP_FEATURE_RAW_HDR      = 1ULL << 44,
TP_FEATURE_RELIGHTING   = 1ULL << 45,
The features MTK provides cover the integration of most algorithms; when one of them fits, just use it directly. If none of them meets your needs, add a new feature as described in the next section. Each feature is a single bit in a 64-bit mask, combined with bitwise OR and tested with bitwise AND, as the small sketch below illustrates.
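A minimal, self-contained sketch of how these bits are used; the constants here are illustrative stand-ins for the real enum values listed above:

#include <cstdint>
#include <cstdio>

// Illustrative stand-ins for values from mtk_feature_type.h / customer_feature_type.h.
constexpr uint64_t MTK_FEATURE_NR       = 1ULL << 4;
constexpr uint64_t TP_FEATURE_FB        = 1ULL << 35;
constexpr uint64_t TP_FEATURE_WATERMARK = 1ULL << 46;  // the custom bit added in 2.2

int main() {
    // A feature combination is an OR of feature bits, exactly like the
    // *_FEATURE_COMBINATION_* macros in the scenario tables of section 3.
    uint64_t combination = MTK_FEATURE_NR | TP_FEATURE_FB | TP_FEATURE_WATERMARK;

    // The pipeline decides whether a plugin participates by AND-ing its feature
    // bit against the combination, e.g. "feature & TP_FEATURE_WATERMARK".
    if (combination & TP_FEATURE_WATERMARK) {
        std::printf("watermark feature is part of this scenario\n");
    }
    return 0;
}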
2.2 Adding a Custom Feature
A single-frame algorithm could simply reuse the MTK-provided MTK_FEATURE_FB or TP_FEATURE_FB, but to show how a new feature is added we define a custom one instead: TP_FEATURE_WATERMARK.
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
old mode 100644
new mode 100755
index a41fd864f5..17bc35eea8
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
@@ -59,6 +59,7 @@ enum eFeatureIndexCustomer {
     TP_FEATURE_PUREBOKEH    = 1ULL << 43,
     TP_FEATURE_RAW_HDR      = 1ULL << 44,
     TP_FEATURE_RELIGHTING   = 1ULL << 45,
+    TP_FEATURE_WATERMARK    = 1ULL << 46,
     // TODO: reserve for customer feature index (bit 32-63)
 };
vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
old mode 100644
new mode 100755
index e32f80a609..47273b01c7
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
@@ -599,6 +599,7 @@ const char* FeatID2Name(FeatureID_T fid)
     case FID_FUSION_3RD_PARTY:      return "fusion_3rd_party";
     case FID_PUREBOKEH_3RD_PARTY:   return "purebokeh_3rd_party";
     case FID_RELIGHTING_3RD_PARTY:  return "relighting_3rd_party";
+    case FID_WATERMARK_3RD_PARTY:   return "watermark_3rd_party";
     default:                        return "unknown";
     };
vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
index 8bb794ba02..d4343aaccf 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
@@ -779,7 +779,8 @@ MBOOL YUVNode::onInit()
         featId = FID_FB_3RD_PARTY;
     else if (rProperty.mFeatures & TP_FEATURE_RELIGHTING)
         featId = FID_RELIGHTING_3RD_PARTY;
-
+    else if (rProperty.mFeatures & TP_FEATURE_WATERMARK)
+        featId = FID_WATERMARK_3RD_PARTY;
     if (featId != NULL_FEATURE)
     {
         MY_LOGD_IF(mLogLevel, "%s finds plugin:%s, priority:%d",
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
old mode 100644
new mode 100755
index 2f1ad8a665..ab47aae456
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
@@ -172,6 +172,7 @@ enum CaptureFeatureFeatureID {
     FID_FUSION_3RD_PARTY,
     FID_PUREBOKEH_3RD_PARTY,
     FID_RELIGHTING_3RD_PARTY,
+    FID_WATERMARK_3RD_PARTY,
     NUM_OF_FEATURE,
     NULL_FEATURE = 0xFF,
 };
vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
old mode 100644
new mode 100755
index cc1dc549fd..00559cbc30
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
@@ -428,6 +428,9 @@ MBOOL CaptureProcessor::onEnque(const sp&pP2Frame)
         pCapRequest->addFeature(FID_HFG);
         if (feature & MTK_FEATURE_DCE)
             pCapRequest->addFeature(FID_DCE);
+        if (feature & TP_FEATURE_WATERMARK)
+            pCapRequest->addFeature(FID_WATERMARK_3RD_PARTY);
+
     }
 }
3. Adding the Algorithm's Feature to the Scenario Configuration Table
When the camera is opened for preview and capture, MTK HAL3 runs vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/FeatureSettingPolicy.cpp, which calls the get_streaming_scenario and get_capture_scenario functions in vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/scenario_mgr.cpp. These functions read a scenario feature configuration table and iterate over all features to decide which ones will run. The table contains many scenarios, and one scenario may map to several features, so after adding a custom feature we also have to add it to this table. The table for MTK features is gMtkScenarioFeaturesMaps; the table for customer features is gCustomerScenarioFeaturesMaps.
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
old mode 100644
new mode 100755
index f8d081e433..577f85797e
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
@@ -93,30 +93,30 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
 // #define  (key feature | post-processing features | ...)
 //
 // single cam capture feature combination
-#define TP_FEATURE_COMBINATION_SINGLE   (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_HDR      (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_AINR     (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_MFNR     (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_SINGLE   (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_HDR      (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_AINR     (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_MFNR     (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
 #define TP_FEATURE_COMBINATION_CSHOT    (NO_FEATURE_NORMAL | MTK_FEATURE_CZ| MTK_FEATURE_HFG)
-#define TP_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
 #define TP_FEATURE_COMBINATION_PRO      (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE)
-#define TP_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
 // dual cam capture feature combination
 // the VSDOF means the combination of Bokeh feature and Depth feature
-#define TP_FEATURE_COMBINATION_TP_VSDOF      (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_VSDOF_HDR  (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_FUSION     (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_FUSION)
-#define TP_FEATURE_COMBINATION_TP_PUREBOKEH  (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_PUREBOKEH)
+#define TP_FEATURE_COMBINATION_TP_VSDOF      (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_VSDOF_HDR  (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_FUSION     (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_PUREBOKEH  (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
 // streaming feature combination (TODO: it should be refined by streaming scenario feature)
-#define TP_FEATURE_COMBINATION_VIDEO_NORMAL       (MTK_FEATURE_FB|TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_YUV     (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_VIDEO_NORMAL       (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_YUV     (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
 // ====================================================================================================== //

 /******************************************************************************
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
old mode 100644
new mode 100755
index 011f551354..f14ff8a6e2
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -89,29 +89,29 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
 // #define  (key feature | post-processing features | ...)
 //
 // single cam capture feature combination
-#define MTK_FEATURE_COMBINATION_SINGLE   (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_HDR      (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_AINR     (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_MFNR     (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_SINGLE   (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_HDR      (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_AINR     (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_MFNR     (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_CSHOT    (NO_FEATURE_NORMAL | MTK_FEATURE_CZ| MTK_FEATURE_HFG)
-#define MTK_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
 // dual cam capture feature combination
 // the VSDOF means the combination of Bokeh feature and Depth feature
-#define MTK_FEATURE_COMBINATION_TP_VSDOF      (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_VSDOF_HDR  (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_FUSION     (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION)
-#define MTK_FEATURE_COMBINATION_TP_PUREBOKEH  (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF      (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF_HDR  (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_FUSION     (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_PUREBOKEH  (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
 // streaming feature combination (TODO: it should be refined by streaming scenario feature)
-#define MTK_FEATURE_COMBINATION_VIDEO_NORMAL       (MTK_FEATURE_FB|TP_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_YUV     (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_VIDEO_NORMAL       (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_YUV     (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
 // ====================================================================================================== //

 /******************************************************************************
Note:
MTK reworked the customization of the scenario configuration table in Android Q (10.0). On Android Q and later, scenarios are configured in:
vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp, where [platform] is something like mt6580 or mt6763.
When adding a custom feature to the scenario table, do not be greedy: add it only to the scenarios that actually need it, otherwise multiple algorithms may conflict with each other. For simple use cases, adding it to MTK_FEATURE_COMBINATION_SINGLE and TP_FEATURE_COMBINATION_SINGLE covers the vast majority of needs. (Updated 2021-02-02)
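For reference, the combination macros above are consumed by the scenario tables themselves. The entries of gMtkScenarioFeaturesMaps / gCustomerScenarioFeaturesMaps are assembled with helper macros roughly along the following lines; the macro and scenario names here are recalled from the MT6763 tree rather than quoted, so verify them against your own scenario_mgr.cpp:

// Illustrative only -- check the exact scenario and macro names in your codebase.
// Each scenario lists the feature combinations it allows, which is why adding
// TP_FEATURE_WATERMARK to a combination macro is enough to make the feature
// selectable for every scenario that uses that combination.
CAMERA_SCENARIO_START(CUSTOMER_CAMERA_SCENARIO_CAPTURE_NORMAL)
    ADD_CAMERA_FEATURE_SET(TP_FEATURE_HDR,    TP_FEATURE_COMBINATION_HDR)
    ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,  TP_FEATURE_COMBINATION_MFNR)
    ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, TP_FEATURE_COMBINATION_SINGLE)
CAMERA_SCENARIO_END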
4. Hooking Up the Algorithm
4.1 Choosing a Plugin for the Algorithm
In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 groups the mount points for third-party algorithms roughly as follows:
BokehPlugin: mount point for bokeh algorithms, i.e. the blur part of a dual-camera depth algorithm.
DepthPlugin: mount point for depth algorithms, i.e. the depth-computation part of a dual-camera depth algorithm.
FusionPlugin: mount point for a combined dual-camera depth algorithm, where depth and bokeh are handled by a single algorithm.
JoinPlugin: mount point for streaming-related algorithms; preview algorithms hook in here.
MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.
RawPlugin: mount point for RAW algorithms, e.g. remosaic.
YuvPlugin: mount point for single-frame YUV algorithms, e.g. beautification, wide-angle distortion correction.
Pick the plugin that matches the algorithm being integrated. Ours is a single-frame algorithm, so we use JoinPlugin for preview and YuvPlugin for capture. A stripped-down provider skeleton is sketched below.
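Before the full listings in section 4.2, here is a condensed sketch of what a capture-side YuvPlugin provider looks like. It keeps only the hooks that WatermarkCapture.cpp actually implements (property/negotiate/process plus registration), assumes the mtkcam3 plugin headers from the paths above are available, and omits init/uninit/set/abort and all buffer handling, so it is not meant to compile on its own:

// Condensed sketch of a YuvPlugin provider (see section 4.2.3 for the real code).
class WatermarkCaptureSketch : public YuvPlugin::IProvider {
public:
    typedef YuvPlugin::Property             Property;
    typedef YuvPlugin::Selection            Selection;
    typedef YuvPlugin::Request::Ptr         RequestPtr;
    typedef YuvPlugin::RequestCallback::Ptr RequestCallbackPtr;

    const Property& property() {
        static Property prop;
        prop.mName     = "TP_WATERMARK";        // shown in pipeline logs
        prop.mFeatures = TP_FEATURE_WATERMARK;  // the feature bit added in 2.2
        return prop;
    }

    MERROR negotiate(Selection& sel) {
        // Declare the buffer formats/sizes and the metadata the algorithm needs.
        sel.mIBufferFull.setRequired(MTRUE)
                        .addAcceptedFormat(eImgFmt_I420)
                        .addAcceptedSize(eImgSize_Full);
        sel.mIMetadataApp.setRequired(MTRUE);
        return OK;
    }

    MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
        // Acquire the input/output buffers, call into the third-party library,
        // release the buffers, then report completion.
        if (pCallback != nullptr)
            pCallback->onCompleted(pRequest, 0);
        return OK;
    }

    // init()/uninit()/set()/abort() are omitted here; see the full listing below.
};

// Registration binds the provider to the YUV mount point:
// REGISTER_PLUGIN_PROVIDER(Yuv, WatermarkCaptureSketch);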
4.2 Writing the Integration Files
Model the capture path on FBImpl.cpp and the preview path on sample_streaming_fb.cpp. The directory layout is as follows:
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/
├── Android.mk
├── include
│ └── watermark.h
├── lib
│ ├── arm64-v8a
│ │ └── libwatermark.so
│ └── armeabi-v7a
│ └── libwatermark.so
├── res
│ └── watermark.rgba
├── WatermarkCapture.cpp
└── WatermarkPreview.cpp
File descriptions:
Android.mk builds the algorithm library, the header, and the integration CPP sources into libmtkcam.plugin.tp_watermark, which libmtkcam_3rdparty.customer then depends on.
The integration CPP sources: WatermarkCapture.cpp handles capture, WatermarkPreview.cpp handles preview.
libwatermark.so implements the watermarking and stands in for the third-party algorithm library being integrated; watermark.h is its header.
watermark.rgba is the watermark image itself.
4.2.1 Adding a Global Build Switch
So that each project can decide whether to integrate this algorithm, add a switch in device/mediateksample/k63v2_64_bsp/ProjectConfig.mk that gates compilation of the newly integrated algorithm:
QXT_WATERMARK_SUPPORT = yes
For a project that does not need the algorithm, simply set QXT_WATERMARK_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.
4.2.2 mtkcam3/3rdparty/customer/tp_watermark/Android.mk
ifeq ($(QXT_WATERMARK_SUPPORT),yes)
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libwatermark
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libwatermark.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libwatermark.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)

################################################################################
include $(CLEAR_VARS)

#-----------------------------------------------------------
include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

#-----------------------------------------------------------
LOCAL_SRC_FILES += WatermarkCapture.cpp
LOCAL_SRC_FILES += WatermarkPreview.cpp

#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/include
#
LOCAL_C_INCLUDES += system/media/camera/include
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/tp_watermark/include

#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#

#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
#
LOCAL_WHOLE_STATIC_LIBRARIES +=

#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
LOCAL_SHARED_LIBRARIES += libyuv.vendor

#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_watermark
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)

################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))

endif
4.2.3 mtkcam3/3rdparty/customer/tp_watermark/WatermarkCapture.cpp
Key functions:
In property(), set the feature type to the TP_FEATURE_WATERMARK added in section 3, along with the name, priority and other attributes.
In negotiate(), configure the format and size of the input and output images the algorithm needs.
In negotiate() or process(), fetch the metadata passed down from the upper layer; use it to decide whether the algorithm should run, or pass the parameters on to the algorithm.
In process(), call into the algorithm.
Note:
From the MTK documentation:
When setting formats in negotiate(), if several plugins of the same type are mounted at one mount point, only the input-buffer settings made in the first plugin's negotiate() take effect.
When mounting a single-frame YUV plugin under YUVNode, make sure that the negotiate() of the MTK SWNR plugin returns not-OK directly and does not set any accepted format or the like. Otherwise the accepted formats set by the SWNR plugin and by the third-party plugin may disagree, and the third-party plugin may not get buffers in the format it wants.
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
old mode 100644
new mode 100755
index 0ae951cc83..c4819068f7
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
@@ -340,7 +340,7 @@ negotiate(Selection& sel)
     sel.mOMetadataApp.setRequired(false);
     sel.mOMetadataHal.setRequired(true);
-    return OK;
+    return -EINVAL;//OK;
 }
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/WatermarkCapture.cpp:
#define LOG_TAG "WatermarkCapture" // #include// #include #include #include #include #include // #include #include // // #include // #include #include // #include #include #include #include #include #include #include #include #include #include // using namespace NSCam; using namespace android; using namespace std; using namespace NSCam::NSPipelinePlugin; /****************************************************************************** * ******************************************************************************/ #define MY_LOGV(fmt, arg...) CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg) #define MY_LOGD(fmt, arg...) CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg) #define MY_LOGI(fmt, arg...) CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg) #define MY_LOGW(fmt, arg...) CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg) #define MY_LOGE(fmt, arg...) CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg) // #define FUNCTION_IN MY_LOGD("%s +", __FUNCTION__) #define FUNCTION_OUT MY_LOGD("%s -", __FUNCTION__) //systrace #if 1 #ifndef ATRACE_TAG #define ATRACE_TAG ATRACE_TAG_CAMERA #endif #include #define WATERMARK_TRACE_CALL() ATRACE_CALL() #define WATERMARK_TRACE_NAME(name) ATRACE_NAME(name) #define WATERMARK_TRACE_BEGIN(name) ATRACE_BEGIN(name) #define WATERMARK_TRACE_END() ATRACE_END() #else #define WATERMARK_TRACE_CALL() #define WATERMARK_TRACE_NAME(name) #define WATERMARK_TRACE_BEGIN(name) #define WATERMARK_TRACE_END() #endif template inline bool tryGetMetadata(IMetadata const *pMetadata, MUINT32 tag, T& rVal) { if(pMetadata == nullptr) return MFALSE; IMetadata::IEntry entry = pMetadata->entryFor(tag); if(!entry.isEmpty()) { rVal = entry.itemAt(0,Type2Type ()); return true; } else { #define var(v) #v #define type(t) #t MY_LOGW("no metadata %s in %s", var(tag), type(pMetadata)); #undef type #undef var } return false; } /****************************************************************************** * ******************************************************************************/ class WatermarkCapture : public YuvPlugin::IProvider { public: typedef YuvPlugin::Property Property; typedef YuvPlugin::Selection Selection; typedef YuvPlugin::Request::Ptr RequestPtr; typedef YuvPlugin::RequestCallback::Ptr RequestCallbackPtr; private: int mOpenid; MBOOL mEnable = 1; MBOOL mDump = 0; unsigned char *mSrcRGBA = nullptr; unsigned char *mWatermarkRGBA = nullptr; int mWatermarkWidth = 0; int mWatermarkHeight = 0; public: WatermarkCapture(); ~WatermarkCapture(); void init(); void uninit(); void abort(vector &pRequests); void set(MINT32 iOpenId, MINT32 iOpenId2); const Property &property(); MERROR negotiate(Selection &sel); MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback); }; WatermarkCapture::WatermarkCapture() : mOpenid(-1) { FUNCTION_IN; mEnable = property_get_bool("vendor.debug.camera.watermark.capture.enable", 1); mDump = property_get_bool("vendor.debug.camera.watermark.capture.dump", 0); FUNCTION_OUT; } WatermarkCapture::~WatermarkCapture() { FUNCTION_IN; FUNCTION_OUT; } void WatermarkCapture::init() { FUNCTION_IN; mWatermarkWidth = 180; mWatermarkHeight = 640; int watermarkSize = mWatermarkWidth * mWatermarkHeight * 4; mWatermarkRGBA = (unsigned char *) malloc(watermarkSize); FILE *fp; char path[256]; snprintf(path, sizeof(path), "/vendor/res/images/watermark.rgba"); if ((fp = fopen(path, "r")) == NULL) { MY_LOGE("Failed to open /vendor/res/images/watermark.rgba"); } fread(mWatermarkRGBA, 1, watermarkSize, fp); fclose(fp); FUNCTION_OUT; } 
void WatermarkCapture::uninit() { FUNCTION_IN; free(mWatermarkRGBA); FUNCTION_OUT; } void WatermarkCapture::abort(vector &pRequests) { FUNCTION_IN; (void)pRequests; FUNCTION_OUT; } void WatermarkCapture::set(MINT32 iOpenId, MINT32 iOpenId2) { FUNCTION_IN; MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2); mOpenid = iOpenId; FUNCTION_OUT; } const WatermarkCapture::Property &WatermarkCapture::property() { FUNCTION_IN; static Property prop; static bool inited; if (!inited) { prop.mName = "TP_WATERMARK"; prop.mFeatures = TP_FEATURE_WATERMARK; prop.mInPlace = MTRUE; prop.mFaceData = eFD_Current; prop.mPosition = 0; inited = true; } FUNCTION_OUT; return prop; } MERROR WatermarkCapture::negotiate(Selection &sel) { FUNCTION_IN; if (!mEnable) { MY_LOGD("Force off TP_WATERMARK"); FUNCTION_OUT; return -EINVAL; } sel.mIBufferFull .setRequired(MTRUE) .addAcceptedFormat(eImgFmt_I420) .addAcceptedSize(eImgSize_Full); sel.mIMetadataDynamic.setRequired(MTRUE); sel.mIMetadataApp.setRequired(MTRUE); sel.mIMetadataHal.setRequired(MTRUE); sel.mOMetadataApp.setRequired(MTRUE); sel.mOMetadataHal.setRequired(MTRUE); FUNCTION_OUT; return OK; } MERROR WatermarkCapture::process(RequestPtr pRequest, RequestCallbackPtr pCallback = nullptr) { FUNCTION_IN; WATERMARK_TRACE_CALL(); MBOOL needRun = MFALSE; if (pRequest->mIBufferFull != nullptr && pRequest->mOBufferFull != nullptr) { IImageBuffer *pIBufferFull = pRequest->mIBufferFull->acquire(); IImageBuffer *pOBufferFull = pRequest->mOBufferFull->acquire(); if (pRequest->mIMetadataDynamic != nullptr) { IMetadata *meta = pRequest->mIMetadataDynamic->acquire(); if (meta != NULL) MY_LOGD("[IN] Dynamic metadata count: %d", meta->count()); else MY_LOGD("[IN] Dynamic metadata empty"); } int frameNo = 0, requestNo = 0; if (pRequest->mIMetadataHal != nullptr) { IMetadata *pIMetataHAL = pRequest->mIMetadataHal->acquire(); if (pIMetataHAL != NULL) { MY_LOGD("[IN] HAL metadata count: %d", pIMetataHAL->count()); if (!tryGetMetadata (pIMetataHAL, MTK_PIPELINE_FRAME_NUMBER, frameNo)) { frameNo = 0; } if (!tryGetMetadata (pIMetataHAL, MTK_PIPELINE_REQUEST_NUMBER, requestNo)) { requestNo = 0; } MY_LOGD("frameNo: %d, requestNo: %d", frameNo, requestNo); } else { MY_LOGD("[IN] HAL metadata empty"); } } if (pRequest->mIMetadataApp != nullptr) { IMetadata *pIMetadataApp = pRequest->mIMetadataApp->acquire(); MINT32 mode = 0; if (!tryGetMetadata (pIMetadataApp, QXT_FEATURE_WATERMARK, mode)) { mode = 0; } needRun = mode == 1 ? 
1 : 0; } MY_LOGD("needRun: %d", needRun); int width = pIBufferFull->getImgSize().w; int height = pIBufferFull->getImgSize().h; MINT inFormat = pIBufferFull->getImgFormat(); if (needRun && inFormat == NSCam::eImgFmt_I420) { uint32_t currentTime = (NSCam::Utils::TimeTool::getReadableTime()) % 1000; time_t timep; time (&timep); char currentDate[20]; strftime(currentDate, sizeof(currentDate), "%Y%m%d_%H%M%S", localtime(&timep)); //dump input I420 if (mDump) { char path[256]; snprintf(path, sizeof(path), "/data/vendor/camera_dump/capture_in_frame%d_%dx%d_%s_%d.i420", frameNo, width, height, currentDate, currentTime); pIBufferFull->saveToFile(path); } nsecs_t t1 = systemTime(CLOCK_MONOTONIC); if (mSrcRGBA == NULL) { mSrcRGBA = (unsigned char *) malloc(width * height * 4); } //convert I420 to RGBA libyuv::I420ToABGR((unsigned char *) (pIBufferFull->getBufVA(0)), width, (unsigned char *) (pIBufferFull->getBufVA(1)), width >> 1, (unsigned char *) (pIBufferFull->getBufVA(2)), width >> 1, mSrcRGBA, width * 4, width, height); nsecs_t t2 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Prepare src cost %02ld ms", ns2ms(t2 - t1)); Watermark::add(mSrcRGBA, width, height, mWatermarkRGBA, mWatermarkWidth, mWatermarkHeight, (width - mWatermarkWidth) / 2, (height - mWatermarkHeight) / 2); nsecs_t t3 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Add watermark cost %02ld ms", ns2ms(t3 - t2)); //convert RGBA to I420 libyuv::ABGRToI420(mSrcRGBA, width * 4, (unsigned char *) (pOBufferFull->getBufVA(0)), width, (unsigned char *) (pOBufferFull->getBufVA(1)), width >> 1, (unsigned char *) (pOBufferFull->getBufVA(2)), width >> 1, width, height); nsecs_t t4 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Copy in to out cost %02ld ms", ns2ms(t4 - t3)); //dump output I420 if (mDump) { char path[256]; snprintf(path, sizeof(path), "/data/vendor/camera_dump/capture_out_frame%d_%dx%d_%s_%d.i420", frameNo, width, height, currentDate, currentTime); pOBufferFull->saveToFile(path); } free(mSrcRGBA); } else { if (!needRun) { MY_LOGE("No need run, skip add watermark for capture."); } else if (inFormat != NSCam::eImgFmt_YV12) { MY_LOGE("Unsupported format, skip add watermark for capture."); } else { MY_LOGE("Unknown exception, skip add watermark for capture."); } memcpy((unsigned char *) (pOBufferFull->getBufVA(0)), (unsigned char *) (pIBufferFull->getBufVA(0)), pIBufferFull->getBufSizeInBytes(0)); memcpy((unsigned char *) (pOBufferFull->getBufVA(1)), (unsigned char *) (pIBufferFull->getBufVA(1)), pIBufferFull->getBufSizeInBytes(1)); memcpy((unsigned char *) (pOBufferFull->getBufVA(2)), (unsigned char *) (pIBufferFull->getBufVA(2)), pIBufferFull->getBufSizeInBytes(2)); } pRequest->mIBufferFull->release(); pRequest->mOBufferFull->release(); if (pRequest->mIMetadataDynamic != nullptr) { pRequest->mIMetadataDynamic->release(); } if (pRequest->mIMetadataHal != nullptr) { pRequest->mIMetadataHal->release(); } if (pRequest->mIMetadataApp != nullptr) { pRequest->mIMetadataApp->release(); } } if (pCallback != nullptr) { MY_LOGD("callback request"); pCallback->onCompleted(pRequest, 0); } FUNCTION_OUT; return OK; } REGISTER_PLUGIN_PROVIDER(Yuv, WatermarkCapture);
4.2.4 mtkcam3/3rdparty/customer/tp_watermark/WatermarkPreview.cpp
#include#include #include #include #include #include #include #include #include using NSCam::NSPipelinePlugin::Interceptor; using NSCam::NSPipelinePlugin::PipelinePlugin; using NSCam::NSPipelinePlugin::PluginRegister; using NSCam::NSPipelinePlugin::Join; using NSCam::NSPipelinePlugin::JoinPlugin; using namespace NSCam::NSPipelinePlugin; using NSCam::MSize; using NSCam::MERROR; using NSCam::IImageBuffer; using NSCam::IMetadata; using NSCam::Type2Type; #ifdef LOG_TAG #undef LOG_TAG #endif // LOG_TAG #define LOG_TAG "WatermarkPreview" #include #include #define MY_LOGI(fmt, arg...) ALOGI("[%s] " fmt, __FUNCTION__, ##arg) #define MY_LOGD(fmt, arg...) ALOGD("[%s] " fmt, __FUNCTION__, ##arg) #define MY_LOGW(fmt, arg...) ALOGW("[%s] " fmt, __FUNCTION__, ##arg) #define MY_LOGE(fmt, arg...) ALOGE("[%s] " fmt, __FUNCTION__, ##arg) #define FUNCTION_IN MY_LOGD("%s +", __FUNCTION__) #define FUNCTION_OUT MY_LOGD("%s -", __FUNCTION__) template inline bool tryGetMetadata(IMetadata const *pMetadata, MUINT32 tag, T& rVal) { if(pMetadata == nullptr) return MFALSE; IMetadata::IEntry entry = pMetadata->entryFor(tag); if(!entry.isEmpty()) { rVal = entry.itemAt(0,Type2Type ()); return true; } else { #define var(v) #v #define type(t) #t MY_LOGW("no metadata %s in %s", var(tag), type(pMetadata)); #undef type #undef var } return false; } class WatermarkPreview : public JoinPlugin::IProvider { public: typedef JoinPlugin::Property Property; typedef JoinPlugin::Selection Selection; typedef JoinPlugin::Request::Ptr RequestPtr; typedef JoinPlugin::RequestCallback::Ptr RequestCallbackPtr; private: bool mDisponly = false; bool mInplace = false; int mOpenID1 = 0; int mOpenID2 = 0; MBOOL mEnable = 1; MBOOL mDump = 0; unsigned char *mSrcRGBA = nullptr; unsigned char *mWatermarkRGBA = nullptr; int mWatermarkWidth = 0; int mWatermarkHeight = 0; public: WatermarkPreview(); ~WatermarkPreview(); void init(); void uninit(); void abort(std::vector &pRequests); void set(MINT32 openID1, MINT32 openID2); const Property &property(); MERROR negotiate(Selection &sel); MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback); private: MERROR getConfigSetting(Selection &sel); MERROR getP1Setting(Selection &sel); MERROR getP2Setting(Selection &sel); }; WatermarkPreview::WatermarkPreview() { FUNCTION_IN; mEnable = property_get_bool("vendor.debug.camera.watermark.preview.enable", 1); mDump = property_get_bool("vendor.debug.camera.watermark.preview.dump", 0); FUNCTION_OUT; } WatermarkPreview::~WatermarkPreview() { FUNCTION_IN; FUNCTION_OUT; } void WatermarkPreview::init() { FUNCTION_IN; mWatermarkWidth = 180; mWatermarkHeight = 640; int watermarkSize = mWatermarkWidth * mWatermarkHeight * 4; mWatermarkRGBA = (unsigned char *) malloc(watermarkSize); FILE *fp; char path[256]; snprintf(path, sizeof(path), "/vendor/res/images/watermark.rgba"); if ((fp = fopen(path, "r")) == NULL) { MY_LOGE("Failed to open /vendor/res/images/watermark.rgba"); } fread(mWatermarkRGBA, 1, watermarkSize, fp); fclose(fp); FUNCTION_OUT; } void WatermarkPreview::uninit() { FUNCTION_IN; free(mSrcRGBA); free(mWatermarkRGBA); FUNCTION_OUT; } void WatermarkPreview::abort(std::vector &pRequests) { FUNCTION_IN; (void)pRequests; FUNCTION_OUT; } void WatermarkPreview::set(MINT32 openID1, MINT32 openID2) { FUNCTION_IN; MY_LOGD("set openID1:%d openID2:%d", openID1, openID2); mOpenID1 = openID1; mOpenID2 = openID2; FUNCTION_OUT; } const WatermarkPreview::Property &WatermarkPreview::property() { FUNCTION_IN; static Property prop; static bool inited; if (!inited) { prop.mName 
= "TP_WATERMARK"; prop.mFeatures = TP_FEATURE_WATERMARK; //prop.mInPlace = MTRUE; //prop.mFaceData = eFD_Current; //prop.mPosition = 0; inited = true; } FUNCTION_OUT; return prop; } MERROR WatermarkPreview::negotiate(Selection &sel) { FUNCTION_IN; MERROR ret = OK; if (sel.mSelStage == eSelStage_CFG) { ret = getConfigSetting(sel); } else if (sel.mSelStage == eSelStage_P1) { ret = getP1Setting(sel); } else if (sel.mSelStage == eSelStage_P2) { ret = getP2Setting(sel); } FUNCTION_OUT; return ret; } MERROR WatermarkPreview::process(RequestPtr pRequest, RequestCallbackPtr pCallback) { FUNCTION_IN; (void) pCallback; MERROR ret = -EINVAL; MBOOL needRun = MFALSE; IImageBuffer *in = NULL, *out = NULL; if (pRequest->mIBufferMain1 != NULL && pRequest->mOBufferMain1 != NULL) { in = pRequest->mIBufferMain1->acquire(); out = pRequest->mOBufferMain1->acquire(); int frameNo = 0, requestNo = 0; if (pRequest->mIMetadataHal1 != nullptr) { IMetadata *pIMetataHAL1 = pRequest->mIMetadataHal1->acquire(); if (pIMetataHAL1 != NULL) { if (!tryGetMetadata (pIMetataHAL1, MTK_PIPELINE_FRAME_NUMBER, frameNo)) { frameNo = 0; } if (!tryGetMetadata (pIMetataHAL1, MTK_PIPELINE_REQUEST_NUMBER, requestNo)) { requestNo = 0; } pRequest->mIMetadataHal1->release(); MY_LOGD("frameNo: %d, requestNo: %d", frameNo, requestNo); } else { MY_LOGD("HAL metadata empty"); } } MY_LOGD("in[%d](%dx%d)=%p out[%d](%dx%d)=%p", in->getPlaneCount(), in->getImgSize().w, in->getImgSize().h, in, out->getPlaneCount(), out->getImgSize().w, out->getImgSize().h, out); if (pRequest->mIMetadataApp != nullptr) { IMetadata *pIMetadataApp = pRequest->mIMetadataApp->acquire(); MINT32 mode = 0; if (!tryGetMetadata (pIMetadataApp, QXT_FEATURE_WATERMARK, mode)) { mode = 0; } needRun = mode == 1 ? 1 : 0; pRequest->mIMetadataApp->release(); } MY_LOGD("needRun: %d", needRun); int width = in->getImgSize().w; int height = in->getImgSize().h; MINT inFormat = in->getImgFormat(); if (needRun && inFormat == NSCam::eImgFmt_YV12) { uint32_t currentTime = (NSCam::Utils::TimeTool::getReadableTime()) % 1000; time_t timep; time (&timep); char currentDate[20]; strftime(currentDate, sizeof(currentDate), "%Y%m%d_%H%M%S", localtime(&timep)); //dump input YV12 if (mDump) { char path[256]; snprintf(path, sizeof(path), "/data/vendor/camera_dump/preview_in_frame%d_%dx%d_%s_%d.yv12", frameNo, width, height, currentDate, currentTime); in->saveToFile(path); } nsecs_t t1 = systemTime(CLOCK_MONOTONIC); if (mSrcRGBA == NULL) { mSrcRGBA = (unsigned char *) malloc(width * height * 4); } //convert YV12 to RGBA libyuv::I420ToABGR((unsigned char *)(in->getBufVA(0)), width, (unsigned char *)(in->getBufVA(2)), width >> 1, (unsigned char *)(in->getBufVA(1)), width >> 1, mSrcRGBA, width * 4, width, height); nsecs_t t2 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Prepare src cost %02ld ms", ns2ms(t2 - t1)); Watermark::add(mSrcRGBA, width, height, mWatermarkRGBA, mWatermarkWidth, mWatermarkHeight, (width - mWatermarkWidth) / 2, (height - mWatermarkHeight) / 2); nsecs_t t3 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Add watermark cost %02ld ms", ns2ms(t3 - t2)); //convert RGBA to YV12 libyuv::ABGRToI420(mSrcRGBA, width * 4, (unsigned char *)(out->getBufVA(0)), width, (unsigned char *)(out->getBufVA(2)), width >> 1, (unsigned char *)(out->getBufVA(1)), width >> 1, width, height); nsecs_t t4 = systemTime(CLOCK_MONOTONIC); MY_LOGD("Copy in to out cost %02ld ms", ns2ms(t4 - t3)); //dump output YV12 if (mDump) { char path[256]; snprintf(path, sizeof(path), 
"/data/vendor/camera_dump/preview_out_frame%d_%dx%d_%s_%d.yv12", frameNo, width, height, currentDate, currentTime); out->saveToFile(path); } } else { if (!needRun) { MY_LOGE("No need run, skip add watermark for preview."); } else if (inFormat != NSCam::eImgFmt_YV12) { MY_LOGE("Unsupported format, skip add watermark for preview."); } else { MY_LOGE("Unknown exception, skip add watermark for preview."); } memcpy((unsigned char *) (out->getBufVA(0)), (unsigned char *)(in->getBufVA(0)), in->getBufSizeInBytes(0)); memcpy((unsigned char *) (out->getBufVA(1)), (unsigned char *)(in->getBufVA(1)), in->getBufSizeInBytes(1)); memcpy((unsigned char *) (out->getBufVA(2)), (unsigned char *)(in->getBufVA(2)), in->getBufSizeInBytes(2)); } pRequest->mIBufferMain1->release(); pRequest->mOBufferMain1->release(); ret = OK; } FUNCTION_OUT; return ret; } MERROR WatermarkPreview::getConfigSetting(Selection &sel) { MY_LOGI("max out size(%dx%d)", sel.mCfgInfo.mMaxOutSize.w, sel.mCfgInfo.mMaxOutSize.h); mDisponly = property_get_bool("vendor.debug.tpi.s.fb.disponly", 0); mInplace = mDisponly || property_get_bool("vendor.debug.tpi.s.fb.inplace", 0); sel.mCfgOrder = 3; sel.mCfgJoinEntry = eJoinEntry_S_YUV; sel.mCfgInplace = mInplace; sel.mCfgEnableFD = MTRUE; sel.mCfgRun = mEnable; sel.mIBufferMain1.setRequired(MTRUE); if (!mDisponly && property_get_bool("vendor.debug.tpi.s.fb.nv21", 0)) { sel.mIBufferMain1.addAcceptedFormat(NSCam::eImgFmt_NV21); } if (!mDisponly && property_get_bool("vendor.debug.tpi.s.fb.size", 0)) { sel.mIBufferMain1.setSpecifiedSize(sel.mCfgInfo.mMaxOutSize); } sel.mOBufferMain1.setRequired(MTRUE); sel.mIBufferMain1.addAcceptedFormat(NSCam::eImgFmt_YV12); sel.mIBufferMain1.addAcceptedSize(eImgSize_Full); IMetadata *meta = sel.mIMetadataApp.getControl().get(); MY_LOGD("sessionMeta=%p", meta); return OK; } MERROR WatermarkPreview::getP1Setting(Selection &sel) { (void) sel; return OK; } MERROR WatermarkPreview::getP2Setting(Selection &sel) { MBOOL run = MTRUE; sel.mP2Run = run; return OK; } REGISTER_PLUGIN_PROVIDER(Join, WatermarkPreview);
4.2.5 mtkcam3/3rdparty/customer/Android.mk
The shared library that ultimately ends up in vendor.img is libmtkcam_3rdparty.customer.so, so we also need to modify Android.mk to make the libmtkcam_3rdparty.customer module depend on libmtkcam.plugin.tp_watermark. vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
old mode 100644
new mode 100755
index ce060c39f9..ff5763d3c2
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -70,6 +70,13 @@ LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
 LOCAL_SHARED_LIBRARIES += libcam.iopipe
 LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
 endif
+
+ifeq ($(QXT_WATERMARK_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libwatermark
+LOCAL_SHARED_LIBRARIES += libyuv.vendor
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
+endif
+
 # for app super night ev decision (experimental for customer only)
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
4.2.6 Preinstalling the Watermark File
diff --git a/device/mediateksample/k63v2_64_bsp/device.mk b/device/mediateksample/k63v2_64_bsp/device.mk
index 2619000c72..048c33462e 100644
--- a/device/mediateksample/k63v2_64_bsp/device.mk
+++ b/device/mediateksample/k63v2_64_bsp/device.mk
@@ -98,6 +98,9 @@ PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/re
 PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/res/images/lcd_test_01.png:$(TARGET_COPY_OUT_VENDOR)/res/images/lcd_test_01.png:mtk
 PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/res/images/lcd_test_02.png:$(TARGET_COPY_OUT_VENDOR)/res/images/lcd_test_02.png:mtk
+ifeq ($(QXT_WATERMARK_SUPPORT),yes)
+PRODUCT_COPY_FILES += vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/res/watermark.rgba:$(TARGET_COPY_OUT_VENDOR)/res/images/watermark.rgba
+endif
 # overlay has priorities. high <-> low.
The camera HAL process runs in the mtk_hal_camera SELinux domain and has to read /vendor/res/images/watermark.rgba, which requires read access to vendor_file. Grant mtk_hal_camera that permission as follows:
diff --git a/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te b/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
index 8de5d0a437..7ebd9a03e5 100644
--- a/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
+++ b/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
@@ -92,6 +92,7 @@ allow mtk_hal_camera sysfs_boot_mode:file { read open };
 # Purpose: NDD
 allow mtk_hal_camera vendor_data_file:dir create_dir_perms;
 allow mtk_hal_camera vendor_data_file:file create_file_perms;
+allow mtk_hal_camera vendor_file:file { read getattr open };
5. Custom Metadata
Metadata is added so that the app layer can pass parameters down to the HAL layer; the app sets them through CaptureRequest.Builder.set(@NonNull Key<T> key, T value) using a vendor tag key.
Because our feature is custom, it cannot reuse the metadata MTK provides, so we have to define our own. A sketch of how the plugin reads the new tag follows the code changes below.
vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
index 22d4aa2bf2..b020352092 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
@@ -89,6 +89,7 @@ typedef enum mtk_camera_metadata_section {
     MTK_BGSERVICE_FEATURE  = 12,
     MTK_CONFIGURE_SETTING  = 13,
     MTK_FLASH_FEATURE      = 14,
+    QXT_FEATURE            = 15,
     MTK_VENDOR_SECTION_COUNT,
 } mtk_camera_metadata_section_t;

@@ -146,6 +147,7 @@ typedef enum mtk_camera_metadata_section_start {
     MTK_CONFIGURE_SETTING_START = (MTK_CONFIGURE_SETTING + MTK_VENDOR_TAG_SECTION) << 16,
     MTK_FLASH_FEATURE_START     = (MTK_FLASH_FEATURE     + MTK_VENDOR_TAG_SECTION) << 16,
+    QXT_FEATURE_START           = (QXT_FEATURE            + MTK_VENDOR_TAG_SECTION) << 16,
 } mtk_camera_metadata_section_start_t;

@@ -599,6 +601,8 @@ typedef enum mtk_camera_metadata_tag {
     MTK_FLASH_FEATURE_CALIBRATION_RESULT,  // flash calibration result
     MTK_FLASH_FEATURE_END,
+    QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
+    QXT_FEATURE_END,
 } mtk_camera_metadata_tag_t;

 /**
vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
index 15449c433d..1b4fc75a0e 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
@@ -91,6 +91,11 @@ _IMP_SECTION_INFO_(MTK_DISTORTION_CORRECTION_INFO, "mtk.distortionCorrection")
 _IMP_SECTION_INFO_(MTK_IOPIPE_INFO,  "mtk.iopipe.info")
 _IMP_SECTION_INFO_(MTK_HAL_INFO,     "mtk.hal.info")

+_IMP_SECTION_INFO_(QXT_FEATURE, "com.qxt.camera")
+
+_IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
+                MINT32, "watermark")
+
 /******************************************************************************
  *
 ******************************************************************************
vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
index 2481492f90..33e581adfd 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
@@ -377,6 +377,16 @@ static auto& _FlashFeature_()
 }

+static auto& _QxtFeature_()
+{
+    static const std::map
+            sInst = {
+        _TAG_(QXT_FEATURE_WATERMARK,
+              "watermark", TYPE_INT32),
+    };
+    //
+    return sInst;
+}

 /******************************************************************************
  *
@@ -460,6 +470,10 @@ static auto& getGlobalSections()
         MTK_FLASH_FEATURE_END,
         _FlashFeature_() ),
+    _SECTION_( "com.qxt.camera",
+               QXT_FEATURE_START,
+               QXT_FEATURE_END,
+               _QxtFeature_() ),
     };

     // append custom vendor tags sections to mtk sections
vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
index edd5b5f1b9..591b25b162 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
@@ -578,6 +578,19 @@ updateData(IMetadata &rMetadata)
         }
     }
 #endif
+
+#if 1
+    {
+        IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
+        qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+        rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);
+
+        IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
+        qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+        rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
+    }
+#endif
+
     // update multi-cam feature mode to static metadata
     // vendor tag
     {
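On the HAL side, the plugin reads this tag from the App metadata to decide whether it should run. A condensed sketch of the pattern, mirroring the tryGetMetadata<> template helper defined in the WatermarkCapture.cpp / WatermarkPreview.cpp listings above:

// Condensed from the process() implementations above: read the vendor tag
// QXT_FEATURE_WATERMARK (MINT32) from the already-acquired App metadata and
// run the watermark algorithm only when the app set it to 1.
static MBOOL shouldRunWatermark(IMetadata const* pIMetadataApp)
{
    MINT32 mode = 0;
    if (!tryGetMetadata<MINT32>(pIMetadataApp, QXT_FEATURE_WATERMARK, mode)) {
        mode = 0;  // tag absent -> treat the feature as disabled
    }
    return (mode == 1) ? MTRUE : MFALSE;
}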
Once these steps are done, the integration work is essentially complete. Rebuild the system source; to save time you can rebuild only vendor.img. While it builds, we can write a demo to verify that the algorithm was integrated successfully.
6. Calling the Algorithm from the App
WatermarkActivity:
public class WatermarkActivity extends BaseActivity {
    private static final String TAG = WatermarkActivity.class.getSimpleName();
    /*
     * 16:9 picture size: 3840x2160  preview size 1280x720
     *  4:3 picture size: 3264x2448  preview size 960x720
     * Now is 4:3
     */
    private static final int PREVIEW_WIDTH = 1280;
    private static final int PREVIEW_HEIGHT = 720;
    private static final int CAPTURE_WIDTH = 3264;
    private static final int CAPTURE_HEIGHT = 2448;
    private static final String IMAGE_PATH = Environment.getExternalStorageDirectory().getAbsolutePath()
            + File.separator + "DCIM" + File.separator + "Camera";
    private static final String CAMERA_ID = "0";
    private static final String KEY_WATERMARK = "com.qxt.camera.watermark";
    private static final String SP_NAME = "watermark";
    private static final String SP_STATE_KEY = "state";

    private AutoFitTextureView mTextureView;
    private ProgressBar mProgressBar;
    private Handler mMainHandler;
    private Handler mCameraHandler;
    private HandlerThread mCameraHandlerThread;
    private CameraManager mCameraManager;
    private CaptureRequest.Builder mPreviewBuilder;
    private CameraDevice mCameraDevice;
    private CameraCaptureSession mCameraCaptureSession;
    private MediaActionSound mCameraSound;
    private String mTakePictureTime;
    private SimpleDateFormat mDateFormat = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.getDefault());
    private ImageReader mCaptureImageReader;
    private Surface mSurface;
    // Vendor tag key used to tell the HAL whether the watermark algorithm is enabled
    public CaptureRequest.Key<int[]> mVendorKey;
    private int mVendorKeyEnable;
    private SharedPreferences mSharedPref;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
                WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.activity_watermark);
        mProgressBar = findViewById(R.id.progressbar);
        mTextureView = findViewById(R.id.texture);
        mTextureView.setAspectRatio(PREVIEW_HEIGHT, PREVIEW_WIDTH);
        mCameraSound = new MediaActionSound();
        mCameraSound.load(MediaActionSound.SHUTTER_CLICK);
        mCameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
        mMainHandler = new Handler();
        initVendorTag();
        mSharedPref = getSharedPreferences(SP_NAME, Context.MODE_PRIVATE);
        mVendorKeyEnable = mSharedPref.getInt(SP_STATE_KEY, 0);
        getCameraCharacteristics(CAMERA_ID);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.menu_watermark, menu);
        Switch s = menu.findItem(R.id.action_watermark)
                .getActionView().findViewById(R.id.switch_watermark);
        s.setChecked(mVendorKeyEnable > 0);
        s.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
            @Override
            public void onCheckedChanged(CompoundButton btn, boolean isChecked) {
                if (isChecked) {
                    mVendorKeyEnable = 1;
                } else {
                    mVendorKeyEnable = 0;
                }
                mSharedPref.edit().putInt(SP_STATE_KEY, mVendorKeyEnable).commit();
                if (mPreviewBuilder != null && mCameraCaptureSession != null) {
                    try {
                        // Restart the repeating request so the new vendor-tag value takes effect
                        mCameraCaptureSession.stopRepeating();
                        setVendorTag(mPreviewBuilder);
                        mCameraCaptureSession.setRepeatingRequest(mPreviewBuilder.build(),
                                mSessionCaptureCallback, mCameraHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }
                LogUtils.d(TAG, "[onCheckedChanged] isChecked=" + isChecked
                        + ", mVendorKeyEnable=" + mVendorKeyEnable);
            }
        });
        return true;
    }

    @Override
    protected void onResume() {
        super.onResume();
        initLooper();
        if (mTextureView.isAvailable()) {
            openCamera();
        } else {
            mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        closeCamera();
        stopLooper();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
    }

    public void onClick(View view) {
        if (view != null && view.getId() == R.id.btn_capture) {
            takePicture();
        }
    }

    private void initLooper() {
        mCameraHandlerThread = new HandlerThread("WideAngleCamera");
        mCameraHandlerThread.start();
        mCameraHandler = new Handler(mCameraHandlerThread.getLooper());
    }

    private void stopLooper() {
        try {
            mCameraHandlerThread.quit();
            mCameraHandlerThread.join();
            mCameraHandlerThread = null;
            mCameraHandler = null;
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @SuppressLint("MissingPermission")
    private void openCamera() {
        try {
            mCameraManager.openCamera(CAMERA_ID, new CameraDevice.StateCallback() {
                @Override
                public void onOpened(@NonNull CameraDevice camera) {
                    mCameraDevice = camera;
                    createCameraPreviewSession();
                }

                @Override
                public void onDisconnected(@NonNull CameraDevice camera) {
                    LogUtils.d(TAG, "onDisconnected");
                    camera.close();
                    mCameraDevice = null;
                }

                @Override
                public void onError(@NonNull CameraDevice camera, int error) {
                    LogUtils.d(TAG, "onError error=" + error);
                    camera.close();
                    mCameraDevice = null;
                }
            }, mCameraHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void closeCamera() {
        try {
            if (null != mCameraCaptureSession) {
                mCameraCaptureSession.close();
                mCameraCaptureSession = null;
            }
            if (null != mCameraDevice) {
                mCameraDevice.close();
                mCameraDevice = null;
            }
            if (null != mCaptureImageReader) {
                mCaptureImageReader.close();
                mCaptureImageReader = null;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void createCameraPreviewSession() {
        if (isFinishing() || isDestroyed() || mCameraDevice == null) {
            return;
        }
        try {
            mCaptureImageReader = ImageReader.newInstance(CAPTURE_WIDTH, CAPTURE_HEIGHT,
                    ImageFormat.YUV_420_888, 2);
            mCaptureImageReader.setOnImageAvailableListener(
                    mCaptureOnImageAvailableListener, mCameraHandler);
            mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            setVendorTag(mPreviewBuilder);
            mPreviewBuilder.addTarget(mSurface);
            mCameraDevice.createCaptureSession(Arrays.asList(mSurface, mCaptureImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession session) {
                            if (isFinishing() || isDestroyed() || mCameraDevice == null) {
                                return;
                            }
                            try {
                                mCameraCaptureSession = session;
                                mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_AUTO);
                                mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                                mCameraCaptureSession.setRepeatingRequest(mPreviewBuilder.build(),
                                        mSessionCaptureCallback, mCameraHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                        }
                    }, mCameraHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void takePicture() {
        try {
            mTakePictureTime = mDateFormat.format(System.currentTimeMillis());
            final CaptureRequest.Builder captureBuilder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            setVendorTag(captureBuilder);
            Surface surface = mCaptureImageReader.getSurface();
            captureBuilder.addTarget(surface);
            mCameraCaptureSession.capture(captureBuilder.build(),
                    new CameraCaptureSession.CaptureCallback() {
                        @Override
                        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                                       @NonNull CaptureRequest request,
                                                       @NonNull TotalCaptureResult result) {
                            super.onCaptureCompleted(session, request, result);
                        }
                    }, mCameraHandler);
            if (mCameraSound != null) {
                mCameraSound.play(MediaActionSound.SHUTTER_CLICK);
            }
            mProgressBar.setVisibility(View.VISIBLE);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void notifyPictureTaken() {
        mProgressBar.setVisibility(View.GONE);
        Toast toast = Toast.makeText(WatermarkActivity.this,
                getString(R.string.image_saved, IMAGE_PATH), Toast.LENGTH_SHORT);
        toast.setGravity(Gravity.CENTER, 0, 0);
        toast.show();
    }

    @SuppressWarnings("unused")
    private void getCameraCharacteristics(String cameraId) {
        try {
            CameraCharacteristics cs = mCameraManager.getCameraCharacteristics(cameraId);
            StreamConfigurationMap map = cs.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            if (map != null) {
                // Query the supported output sizes
                Size[] pictureSize = map.getOutputSizes(ImageFormat.JPEG);
                Size[] previewSize = map.getOutputSizes(SurfaceTexture.class);
                StringBuilder pictureBuilder = new StringBuilder("picture sizes: ");
                for (Size size : pictureSize) {
                    pictureBuilder.append(size);
                    pictureBuilder.append(", ");
                }
                LogUtils.d(TAG, pictureBuilder.toString());
                StringBuilder previewBuilder = new StringBuilder("preview sizes: ");
                for (Size size : previewSize) {
                    previewBuilder.append(size);
                    previewBuilder.append(", ");
                }
                LogUtils.d(TAG, previewBuilder.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    TextureView.SurfaceTextureListener mSurfaceTextureListener = new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int w, int h) {
            LogUtils.d(TAG, "onSurfaceTextureAvailable, width:" + w + ", height:" + h);
            surfaceTexture.setDefaultBufferSize(PREVIEW_WIDTH, PREVIEW_HEIGHT);
            mSurface = new Surface(surfaceTexture);
            openCamera();
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int w, int h) {
            LogUtils.d(TAG, "onSurfaceTextureSizeChanged, width:" + w + ", height:" + h);
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
            LogUtils.d(TAG, "onSurfaceTextureDestroyed");
            mSurface = null;
            return false;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        }
    };

    private final CameraCaptureSession.CaptureCallback mSessionCaptureCallback =
            new CameraCaptureSession.CaptureCallback() {
                @Override
                public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                               @NonNull CaptureRequest request,
                                               @NonNull TotalCaptureResult result) {
                    super.onCaptureCompleted(session, request, result);
                    mCameraCaptureSession = session;
                }
            };

    private final ImageReader.OnImageAvailableListener mCaptureOnImageAvailableListener =
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(final ImageReader reader) {
                    LogUtils.d(TAG, "capture onImageAvailable");
                    Image image = reader.acquireLatestImage();
                    if (image == null) return;
                    ImageUtils.saveImage(WatermarkActivity.this, image, IMAGE_PATH,
                            "WIDE_" + mTakePictureTime, ImageUtils.ROTATE_90);
                    image.close();
                    LogUtils.d(TAG, "saved");
                    mMainHandler.post(new Runnable() {
                        @Override
                        public void run() {
                            notifyPictureTaken();
                        }
                    });
                }
            };

    private void initVendorTag() {
        try {
            CameraCharacteristics c = mCameraManager.getCameraCharacteristics(CAMERA_ID);
            mVendorKey = CameraUtils.getSessionKey(c, KEY_WATERMARK);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void setVendorTag(CaptureRequest.Builder builder) {
        if (mVendorKey != null) {
            builder.set(mVendorKey, new int[]{mVendorKeyEnable});
            LogUtils.d(TAG, "[setVendorTag] set watermark to " + mVendorKeyEnable);
        }
    }
}
CameraUtils:
public class CameraUtils {
    private static final String TAG = CameraUtils.class.getSimpleName();

    @RequiresApi(api = Build.VERSION_CODES.P)
    public static CaptureRequest.Key<int[]> getSessionKey(CameraCharacteristics cs, String key) {
        if (cs == null) {
            LogUtils.i(TAG, "[getSessionKey] CameraCharacteristics is null");
            return null;
        }
        CaptureRequest.Key<int[]> targetKey = null;
        List<CaptureRequest.Key<?>> sessionKeys = cs.getAvailableSessionKeys();
        if (sessionKeys == null) {
            LogUtils.i(TAG, "[getSessionKey] No keys!");
            return null;
        }
        for (CaptureRequest.Key<?> sessionKey : sessionKeys) {
            if (sessionKey.getName().equals(key)) {
                LogUtils.i(TAG, "[getSessionKey] key :" + key);
                targetKey = (CaptureRequest.Key<int[]>) sessionKey;
                break;
            }
        }
        return targetKey;
    }
}
七、Problems encountered and their solutions
Problem 1:
If the acquire and release calls on a buffer inside the plugin's process function are not paired, i.e. a buffer is never properly released, then after several consecutive captures the algorithm stops being invoked.
Solution to problem 1:
Add a safety net in YUVNode.cpp so that, even if the integration code forgets to release a handle, YUVNode takes care of it (see the diff below; a self-contained sketch of the correct acquire/release pattern follows it).
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
index 8bb794ba02..d4343aaccf 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
@@ -1050,9 +1051,11 @@ MBOOL YUVNode::onRequestProcess(RequestPtr& pRequest)
         auto pPlgRequest = mPlugin->createRequest();
-        pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : iBufferFullHandle;
+        //pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : iBufferFullHandle;
+        pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : std::move(iBufferFullHandle);
         pPlgRequest->mIBufferClean = PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_PURE_YUV, INPUT);
-        pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : oBufferFullHandle;
+        //pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : oBufferFullHandle;
+        pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : std::move(oBufferFullHandle);
         pPlgRequest->mIMetadataDynamic = PluginHelper::CreateMetadata(pNodeReq, MID_MAN_IN_P1_DYNAMIC);
         pPlgRequest->mIMetadataApp = PluginHelper::CreateMetadata(pNodeReq, MID_MAN_IN_APP);
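Independently of the YUVNode safety net, the root cause is an unbalanced acquire/release pair inside the plugin's process path. The sketch below illustrates the rule with an RAII guard so the release always happens, even on an early return. It is a minimal, self-contained example: BufferHandle, ScopedBuffer and processRequest are stand-in names for illustration, not the real mtkcam types.
#include <iostream>

struct BufferHandle {                  // stand-in for the plugin buffer handle
    void* acquire() { std::cout << "acquire\n"; return &dummy; }
    void  release() { std::cout << "release\n"; }
    int dummy = 0;
};

class ScopedBuffer {                   // releases on scope exit, no matter how we leave
public:
    explicit ScopedBuffer(BufferHandle* h) : mHandle(h), mPtr(h ? h->acquire() : nullptr) {}
    ~ScopedBuffer() { if (mHandle) mHandle->release(); }
    void* get() const { return mPtr; }
private:
    BufferHandle* mHandle;
    void* mPtr;
};

bool processRequest(BufferHandle* in, BufferHandle* out) {
    ScopedBuffer inBuf(in);            // acquire input
    ScopedBuffer outBuf(out);          // acquire output
    if (inBuf.get() == nullptr || outBuf.get() == nullptr) {
        return false;                  // early return: guards still release
    }
    // ... run the watermark algorithm: read from inBuf, write to outBuf ...
    return true;                       // normal return: guards release as well
}

int main() {
    BufferHandle in, out;
    processRequest(&in, &out);
    return 0;
}
If every acquire() in your plugin's process function is guaranteed a matching release() like this, the "algorithm not called after several captures" symptom does not appear in the first place.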
Problem 2:
The algorithm requires RGB data while the HAL delivers YUV. Converting between YUV and RGB with OpenGL and various RGB conversion formulas left a visible color cast in the final photos.
Solution to problem 2:
Use libyuv for the conversion. libyuv is very efficient: in our tests it was faster than both the formula-based approach and OpenCV, and it introduced no color cast. The Android source tree already ships libyuv, so it is easy to pull in via Android.mk, and a minimal conversion sketch follows the snippet below.
Android.mk:
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_SHARED_LIBRARIES += libyuv.vendor
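Here is a minimal sketch of the YUV-to-RGB round trip with libyuv, assuming the working buffer is planar I420 (YUV420P); if the plugin buffer is NV21/NV12, the libyuv::NV21ToARGB / libyuv::ARGBToNV21 variants apply instead. The function yuvRoundTrip is an illustrative helper name, not part of the plugin interface.
#include <vector>
#include <cstdint>
#include "libyuv/convert.h"        // ARGBToI420
#include "libyuv/convert_argb.h"   // I420ToARGB

bool yuvRoundTrip(uint8_t* y, uint8_t* u, uint8_t* v, int width, int height)
{
    const int strideY  = width;
    const int strideUV = width / 2;
    std::vector<uint8_t> argb(static_cast<size_t>(width) * height * 4);

    // I420 -> ARGB (4 bytes per pixel) for the RGB-only algorithm
    if (libyuv::I420ToARGB(y, strideY, u, strideUV, v, strideUV,
                           argb.data(), width * 4, width, height) != 0) {
        return false;
    }

    // ... hand argb.data() to the third-party algorithm here ...

    // ARGB -> I420, writing the result back into the original planes
    return libyuv::ARGBToI420(argb.data(), width * 4,
                              y, strideY, u, strideUV, v, strideUV,
                              width, height) == 0;
}
Because libyuv does the conversion in fixed-point SIMD code rather than per-pixel floating-point formulas, the round trip is both fast and free of the color cast described above.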
If you are not familiar with how to use libyuv, see my other article: YUV420轉(zhuǎn)RGBA之使用libyuv (converting YUV420 to RGBA with libyuv).