Model threading
Methods marked as oneway don't block. For methods not marked as oneway, a client's method call blocks until the server has completed execution or called a synchronous callback (whichever comes first). Server method implementations can call at most one synchronous callback; extra callback calls are discarded and logged as errors. If a method is supposed to return values via callback and doesn't call its callback, this is logged as an error and reported to the client as a transport error.
Threads in passthrough mode
In passthrough mode, most calls are synchronous. However, to preserve the intended behavior that oneway calls don't block the client, a thread is created for each process. For details, see the HIDL overview.
Threads in binderized HALs
To serve incoming RPC calls (including asynchronous callbacks from HALs to HAL users) and death notifications, a threadpool is associated with each process that uses HIDL. If a single process implements multiple HIDL interfaces and/or death notification handlers, its threadpool is shared between all of them. When a process receives an incoming method call from a client, it picks a free thread from the threadpool and executes the call on that thread. If no free thread is available, it blocks until one is available.
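The dispatch behavior above can be sketched with a plain C++ threadpool. This is a simplified stand-in for the hwbinder threadpool written for illustration only, not the libhidl implementation: incoming calls are queued, each is executed by whichever worker is free, and a worker with nothing to do blocks until a call arrives.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Simplified stand-in for a per-process RPC threadpool: incoming calls are
// queued and executed by whichever worker thread is free.
class ThreadPool {
  public:
    explicit ThreadPool(size_t n) {
        for (size_t i = 0; i < n; ++i) {
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> call;
                    {
                        std::unique_lock<std::mutex> lock(mutex_);
                        // A free worker blocks here until a call arrives
                        // (or the pool shuts down).
                        cv_.wait(lock, [this] { return stop_ || !calls_.empty(); });
                        if (stop_ && calls_.empty()) return;
                        call = std::move(calls_.front());
                        calls_.pop();
                    }
                    call();  // execute the "method call" on this thread
                }
            });
        }
    }

    void enqueue(std::function<void()> call) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            calls_.push(std::move(call));
        }
        cv_.notify_one();
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

  private:
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> calls_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stop_ = false;
};
```

With `ThreadPool pool(1)`, queued calls execute strictly one after another, which mirrors how a threadpool size of one serializes calls into a server.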
If the server has only one thread, calls into the server are completed in order. A server with more than one thread might complete calls out of order even if the client has only one thread. However, for a given interface object, oneway calls are guaranteed to be ordered (see Server threading model). For a multi-threaded server that hosts multiple interfaces, oneway calls to different interfaces might be processed concurrently with each other or with other blocking calls.
Multiple nested calls are sent on the same hwbinder thread. For instance, if a process (A) makes a synchronous call from a hwbinder thread into process (B), and process (B) then makes a synchronous callback into process (A), the call is executed on the original hwbinder thread in (A), which is blocked on the original call. This optimization makes it possible for a single-threaded server to handle nested calls, but it doesn't extend to cases where the calls travel through another sequence of IPC calls. For instance, if process (B) makes a binder/vndbinder call into a process (C), and process (C) then calls back into (A), that call can't be served on the original thread in (A).
Server threading model
Except in passthrough mode, server implementations of HIDL interfaces live in a different process than the client and need one or more threads waiting for incoming method calls. These threads are the server's threadpool; the server can decide how many threads it wants running in its threadpool, and can use a threadpool size of one to serialize all calls on its interfaces. If the server has more than one thread in its threadpool, it can receive concurrent incoming calls on any of its interfaces (in C++, this means that shared data must be carefully locked).
One-way calls into the same interface are serialized. If a multi-threaded client calls method1 and method2 on interface IFoo, and method3 on interface IBar, method1 and method2 are always serialized, but method3 can run in parallel with method1 and method2.
A single client thread of execution can cause concurrent execution on a server with multiple threads in two ways:
- oneway calls don't block. If a oneway call is executed and a non-oneway call is then made, the server can execute the oneway call and the non-oneway call simultaneously.
- Server methods that pass data back with synchronous callbacks can unblock the client as soon as the callback is called from the server.
For the second way, any code in the server function that executes after the callback is called can run concurrently, with the server handling subsequent calls from the client. This includes code in the server function as well as automatic destructors that execute at the end of the function. If the server has more than one thread in its threadpool, concurrency issues arise even if calls are coming in from only a single client thread. (If any HAL served by a process needs multiple threads, all HALs have multiple threads, because the threadpool is shared per process.)
As soon as the server calls the provided callback, the transport can invoke the implemented callback on the client and unblock the client. The client then proceeds in parallel with whatever the server implementation does after calling the callback (which might include running destructors). Code in the server function after the callback no longer blocks the client (as long as the server threadpool has enough threads to handle incoming calls), but might execute concurrently with future calls from the client (unless the server threadpool has only one thread).
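This unblocking can be modeled with std::promise standing in for the transport (a simplification purely for illustration; hwbinder works differently under the hood): the client blocks on a future, the server's callback fulfills the promise, and whatever the server does after the callback runs in parallel with the resumed client.

```cpp
#include <atomic>
#include <functional>
#include <future>
#include <thread>

std::atomic<bool> server_done_after_callback{false};

// "Server method": returns its result through a callback, then keeps running.
void serverMethod(std::function<void(int)> cb) {
    cb(42);  // from this point on, the client is unblocked
    // Code here (and destructors at end of scope) races with the client.
    server_done_after_callback = true;
}

// "Client call": blocks only until the server invokes the callback.
int clientCall() {
    std::promise<int> result;
    std::future<int> f = result.get_future();
    std::thread server([&result] {
        serverMethod([&result](int v) { result.set_value(v); });
    });
    int v = f.get();  // unblocked as soon as the callback runs
    server.join();
    return v;
}
```

The client resumes at `f.get()` while `serverMethod` may still be executing its post-callback code, which is exactly the concurrency the text warns about.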
In addition to synchronous callbacks, oneway calls from a single-threaded client can be handled concurrently by a server with multiple threads in its threadpool, but only if those oneway calls are executed on different interfaces. oneway calls on the same interface are always serialized.
Note: We strongly encourage server functions to return as soon as they have called the callback function.
For example (in C++):
Return<void> someMethod(someMethod_cb _cb) {
    // Do some processing, then call callback with return data
    hidl_vec<uint32_t> vec = ...
    _cb(vec);
    // At this point, the client's callback is called,
    // and the client resumes execution.
    ...
    return Void(); // is basically a no-op
}
Client threading model
The threading model on the client differs between non-blocking calls (functions marked with the oneway keyword) and blocking calls (functions without the oneway keyword).
Blocking calls
For blocking calls, the client blocks until one of the following happens:
- A transport error occurs; the Return object contains an error state that can be retrieved with Return::isOk().
- The server implementation calls the callback (if there was one).
- The server implementation returns a value (if there was no callback parameter).
On success, the callback function that the client passes as an argument is always called by the server before the function itself returns. The callback is executed on the same thread that the function call is made on, so implementers must be careful about holding locks during function calls (and should avoid them altogether when possible). A function without a generates statement or a oneway keyword still blocks; the client blocks until the server returns a Return<void> object.
One-way calls
When a function is marked oneway, the client returns immediately and doesn't wait for the server to complete its function call invocation. At the surface (and in aggregate), this means the function call takes half the time because it executes half the code, but when writing performance-sensitive implementations, this has some scheduling implications. Normally, using a one-way call causes the caller to continue to be scheduled, whereas using a normal synchronous call causes the scheduler to transfer immediately from the caller to the callee process. This is a performance optimization in binder. For services where the one-way call must be executed in the target process with high priority, the scheduling policy of the receiving service can be changed. In C++, using libhidltransport's method setMinSchedulerPolicy with the scheduler priorities and policies defined in sched.h ensures that all calls into the service run at least at the set scheduling policy and priority.
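setMinSchedulerPolicy itself is Android-only, but the policy and priority values it takes are the standard ones from sched.h (SCHED_OTHER, SCHED_FIFO, SCHED_RR, and a per-policy priority range). A small sketch, assuming a POSIX system, of querying the valid range for a real-time policy:

```cpp
#include <sched.h>
#include <utility>

// Returns the valid priority range for the SCHED_FIFO real-time policy, as
// defined by sched.h. These policy/priority values are the kind of arguments
// a service would pass to setMinSchedulerPolicy.
std::pair<int, int> fifoPriorityRange() {
    return {sched_get_priority_min(SCHED_FIFO),
            sched_get_priority_max(SCHED_FIFO)};
}
```

Passing a priority outside this range to the scheduler fails, so querying it first is the portable way to pick a valid value.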
Content and code samples on this page are subject to the licenses described in the Content License. Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.
Last updated 2025-07-27 UTC.