As of March 27, 2025, we recommend using android-latest-release instead of aosp-main to build and contribute to AOSP. For details, see Changes to AOSP.
Model threading
Methods marked as oneway don't block. For methods not marked as oneway, a client's method call blocks until the server has completed execution or called a synchronous callback (whichever comes first). Server method implementations can call at most one synchronous callback; extra callback calls are discarded and logged as errors. If a method is supposed to return values via a callback and doesn't call its callback, this is logged as an error and reported to the client as a transport error.
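The block-until-callback behavior can be sketched in portable C++. This is a simplified simulation, not HIDL itself: the name blockingCall is hypothetical, and std::promise stands in for the transport.

```cpp
#include <cstdint>
#include <future>
#include <thread>
#include <vector>

// Hypothetical sketch: a "client" call that blocks until the "server"
// invokes its synchronous callback, mimicking a non-oneway HIDL method.
std::vector<uint32_t> blockingCall() {
    std::promise<std::vector<uint32_t>> promise;
    auto future = promise.get_future();

    // The "server" runs on another thread and calls the callback exactly
    // once; in HIDL, extra callback invocations are dropped and logged.
    std::thread server([&promise] {
        promise.set_value({1, 2, 3});  // the synchronous callback
        // Work after the callback no longer blocks the client.
    });

    auto result = future.get();  // client blocks here until the callback
    server.join();
    return result;
}
```

If the server thread never called set_value, future.get() would block forever; HIDL instead reports a missing callback as a transport error.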
Threads in passthrough mode
In passthrough mode, most calls are synchronous. However, to preserve the intended behavior that oneway calls don't block the client, a thread is created for each process. For details, see the HIDL overview.
Threads in binderized HALs
To serve incoming RPC calls (including asynchronous callbacks from HALs to HAL users) and death notifications, a threadpool is associated with each process that uses HIDL. If a single process implements multiple HIDL interfaces and/or death notification handlers, its threadpool is shared between all of them. When a process receives an incoming method call from a client, it picks a free thread from the threadpool and executes the call on that thread. If no free thread is available, it blocks until one is available.
If the server has only one thread, calls into the server are completed in order. A server with more than one thread might complete calls out of order, even if the client has only one thread. However, for a given interface object, oneway calls are guaranteed to be ordered (see Server threading model). For a multi-threaded server that hosts multiple interfaces, oneway calls to different interfaces might be processed concurrently with each other or with other blocking calls.
Multiple nested calls are sent on the same hwbinder thread. For instance, if process (A) makes a synchronous call from a hwbinder thread into process (B), and process (B) then makes a synchronous call back into process (A), the call is executed on the original hwbinder thread in (A), which is blocked on the original call. This optimization makes it possible for a single-threaded server to handle nested calls, but it doesn't extend to cases where the calls travel through another sequence of IPC calls. For instance, if process (B) had made a binder/vndbinder call into a process (C), and process (C) then called back into (A), the call can't be served on the original thread in (A).
Server threading model
Except in passthrough mode, server implementations of HIDL interfaces live in a different process than the client and need one or more threads waiting for incoming method calls. These threads are the server's threadpool; the server can decide how many threads it wants running in its threadpool, and can use a threadpool size of one to serialize all calls on its interfaces. If the server has more than one thread in its threadpool, it can receive concurrent incoming calls on any of its interfaces (in C++, this means shared data must be carefully locked).
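The need for careful locking can be illustrated with a minimal sketch (the name handleCallsConcurrently is invented for illustration, and plain std::thread stands in for the HIDL threadpool): several pool threads updating shared server state must synchronize on a mutex.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch: pool threads concurrently updating shared server
// state, as can happen when the threadpool has more than one thread.
int handleCallsConcurrently(int numThreads, int callsPerThread) {
    int sharedCounter = 0;  // shared server state
    std::mutex lock;        // protects sharedCounter

    std::vector<std::thread> pool;
    for (int t = 0; t < numThreads; ++t) {
        pool.emplace_back([&] {
            for (int i = 0; i < callsPerThread; ++i) {
                std::lock_guard<std::mutex> guard(lock);  // careful locking
                ++sharedCounter;
            }
        });
    }
    for (auto& th : pool) th.join();
    return sharedCounter;
}
```

Without the lock_guard, the increments would race and the final count could be lost; with it, every "call" is accounted for.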
One-way calls into the same interface are serialized. If a multi-threaded client calls method1 and method2 on interface IFoo, and method3 on interface IBar, method1 and method2 are always serialized, but method3 can run in parallel with method1 and method2.
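The per-interface ordering guarantee can be modeled as one serial work queue per interface (a simplified sketch with invented names; the real transport uses hwbinder, not std::thread): tasks posted to the same queue always complete in posting order.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical sketch: a serial queue standing in for one interface, so
// that oneway calls on the same interface run one at a time, in order.
class SerialQueue {
  public:
    SerialQueue() : worker_([this] { run(); }) {}
    ~SerialQueue() {
        { std::lock_guard<std::mutex> g(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();  // drains remaining tasks before returning
    }
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> g(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }
  private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return;  // done and fully drained
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // calls on this "interface" never overlap
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool done_ = false;
    std::thread worker_;  // declared last so the other members exist first
};

// Posts method1 and method2 to IFoo's queue; they always finish in order.
std::vector<int> callInOrder() {
    std::vector<int> order;
    {
        SerialQueue ifooQueue;                        // interface IFoo
        ifooQueue.post([&] { order.push_back(1); });  // method1
        ifooQueue.post([&] { order.push_back(2); });  // method2
    }  // destructor drains the queue
    return order;
}
```

A second SerialQueue for IBar would run its tasks on its own worker thread, in parallel with IFoo's, mirroring how method3 can overlap method1 and method2.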
A single client thread of execution can cause concurrent execution on a server with multiple threads in two ways:
- oneway calls don't block. If a oneway call is executed and a non-oneway call is then made, the server can execute the oneway call and the non-oneway call simultaneously.
- Server methods that pass data back with synchronous callbacks can unblock the client as soon as the callback is called from the server.
For the second way, any code in the server function that executes after the callback is called can run in parallel with the server handling subsequent calls from the client. This includes code in the server function as well as automatic destructors that execute at the end of the function. If the server has more than one thread in its threadpool, concurrency issues arise even if calls are coming in from only a single client thread. (If any HAL served by a process needs multiple threads, all HALs have multiple threads, because the threadpool is shared per process.)
As soon as the server calls the provided callback, the transport can call the implemented callback on the client and unblock the client. The client proceeds in parallel with whatever the server implementation does after it calls the callback (which might include running destructors). Code in the server function after the callback no longer blocks the client (as long as the server threadpool has enough threads to handle incoming calls), but might execute concurrently with future calls from the client (unless the server threadpool has only one thread).
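The unblock-at-callback behavior can be sketched as follows (a simplified simulation with invented names; std::promise stands in for the transport and an atomic flag stands in for the server's post-callback cleanup and destructors):

```cpp
#include <atomic>
#include <future>
#include <thread>

// Hypothetical sketch: once the "server" calls the callback, the "client"
// is unblocked and runs in parallel with the server's remaining work.
struct ServerResult {
    int value;
    bool cleanupRanAfterCallback;
};

ServerResult callWithPostCallbackWork() {
    std::promise<int> promise;
    auto future = promise.get_future();
    std::atomic<bool> cleanupDone{false};

    std::thread server([&] {
        promise.set_value(42);  // the callback: client unblocks here
        // Everything below runs concurrently with the client, like code
        // and destructors after the callback in a real server function.
        cleanupDone = true;
    });

    int value = future.get();   // client resumes as soon as callback fires
    server.join();              // post-callback work has finished by now
    return {value, cleanupDone.load()};
}
```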
In addition to synchronous callbacks, oneway calls from a single-threaded client can be handled concurrently by a server with multiple threads in its threadpool, but only if those oneway calls are executed on different interfaces. oneway calls on the same interface are always serialized.
Note: We strongly encourage server functions to return as soon as they have called the callback function.
For example (in C++):
Return<void> someMethod(someMethod_cb _cb) {
    // Do some processing, then call callback with return data
    hidl_vec<uint32_t> vec = ...
    _cb(vec);
    // At this point, the client's callback is called,
    // and the client resumes execution.
    ...
    return Void(); // is basically a no-op
};
Client threading model
The threading model on the client differs between non-blocking calls (functions marked with the oneway keyword) and blocking calls (functions that don't have the oneway keyword specified).
Blocking calls
For blocking calls, the client blocks until one of the following happens:
- A transport error occurs; the Return object contains an error state that can be retrieved with Return::isOk().
- The server implementation calls the callback (if there was one).
- The server implementation returns a value (if there was no callback parameter).
In case of success, the callback function the client passes as an argument is always called by the server before the function itself returns. The callback is executed on the same thread that the function call is made on, so implementers must be careful about holding locks during function calls (and should avoid them altogether when possible). A function without a generates statement or a oneway keyword is still blocking; the client blocks until the server returns a Return<void> object.
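The error-state semantics of the Return object can be sketched with a hypothetical stand-in (FakeReturn is invented for illustration; the real type is android::hardware::Return and is not reproduced here): a result either carries a value or a transport error, distinguished by isOk().

```cpp
// Hypothetical sketch of a Return-like wrapper, illustrating how a result
// carries either a value or a transport-error state queried via isOk().
template <typename T>
class FakeReturn {
  public:
    explicit FakeReturn(T value) : value_(value), ok_(true) {}
    static FakeReturn transportError() { return FakeReturn(); }
    bool isOk() const { return ok_; }   // false after a transport error
    T value() const { return value_; }  // only meaningful when isOk()
  private:
    FakeReturn() : value_(), ok_(false) {}
    T value_;
    bool ok_;
};
```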
One-way calls
When a function is marked oneway, the client returns immediately and doesn't wait for the server to complete its function call. At the surface (and in aggregate), this means the function call takes half the time because it executes half the code, but when writing performance-sensitive implementations, this has some scheduling implications. Normally, a one-way call causes the caller to continue to be scheduled, whereas a normal synchronous call causes the scheduler to transfer immediately from the caller to the callee process. This is a performance optimization in binder. For services where the one-way call must be executed in the target process with high priority, the scheduling policy of the receiving service can be changed. In C++, using libhidltransport's setMinSchedulerPolicy method with the scheduler priorities and policies defined in sched.h ensures that all calls into the service run at least at the set scheduling policy and priority.
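The policies and priorities that setMinSchedulerPolicy accepts come from sched.h. A portable sketch of just the POSIX side (setMinSchedulerPolicy itself is Android-only and not shown; fifoPriorityRangeIsValid is an invented helper) queries the valid real-time priority range:

```cpp
#include <sched.h>

// Sketch of the POSIX side only: query the valid priority range for
// SCHED_FIFO, one of the policies defined in sched.h that a service's
// minimum scheduling policy could be set to.
bool fifoPriorityRangeIsValid() {
    int lo = sched_get_priority_min(SCHED_FIFO);
    int hi = sched_get_priority_max(SCHED_FIFO);
    // On Linux, SCHED_FIFO priorities typically span 1..99.
    return lo >= 0 && hi >= lo;
}
```

A priority chosen for a service must fall inside this range, or the scheduler rejects it.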
Content and code samples on this page are subject to the licenses described in the Content License. Java and OpenJDK are registered trademarks of Oracle and/or its affiliates.
Last updated (UTC): 2025-07-27.